Dec 02 13:42:33 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 02 13:42:33 crc restorecon[4748]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:33 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:42:34 crc restorecon[4748]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 
13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:42:34 crc 
restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 
13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:42:34 crc restorecon[4748]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:42:34 crc restorecon[4748]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 02 13:42:34 crc kubenswrapper[4900]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 13:42:34 crc kubenswrapper[4900]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 02 13:42:34 crc kubenswrapper[4900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 13:42:34 crc kubenswrapper[4900]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 02 13:42:34 crc kubenswrapper[4900]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 02 13:42:34 crc kubenswrapper[4900]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.718465 4900 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.720953 4900 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.720972 4900 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.720977 4900 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.720982 4900 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.720986 4900 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.720991 4900 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.720998 4900 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721004 4900 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721011 4900 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721017 4900 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721027 4900 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721033 4900 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721038 4900 feature_gate.go:330] unrecognized feature gate: Example Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721042 4900 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721047 4900 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721051 4900 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721056 4900 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721060 4900 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721065 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721070 4900 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721074 4900 
feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721079 4900 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721084 4900 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721089 4900 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721094 4900 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721099 4900 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721104 4900 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721109 4900 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721113 4900 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721117 4900 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721121 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721126 4900 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721131 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721136 4900 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721140 4900 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721145 4900 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721151 4900 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721157 4900 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721162 4900 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721167 4900 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721172 4900 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721177 4900 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721181 4900 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721188 4900 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721194 4900 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721201 4900 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721208 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721213 4900 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721218 4900 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721222 4900 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721227 4900 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721231 4900 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721235 4900 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721240 4900 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721244 4900 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721248 4900 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721252 4900 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721256 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721260 4900 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721264 4900 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721269 4900 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721273 4900 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721277 4900 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721281 4900 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721286 4900 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721290 4900 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721294 4900 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721298 4900 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721304 4900 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721310 4900 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.721314 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721579 4900 flags.go:64] FLAG: --address="0.0.0.0" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721594 4900 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721605 4900 flags.go:64] FLAG: --anonymous-auth="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721612 4900 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721621 4900 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721626 4900 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721633 4900 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721682 4900 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721687 4900 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721692 4900 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721699 4900 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721705 4900 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721710 4900 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721716 4900 flags.go:64] FLAG: --cgroup-root="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721721 4900 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721726 4900 flags.go:64] FLAG: --client-ca-file="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721731 4900 flags.go:64] FLAG: --cloud-config="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721736 4900 flags.go:64] FLAG: --cloud-provider="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721741 4900 flags.go:64] FLAG: --cluster-dns="[]" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721747 4900 flags.go:64] FLAG: --cluster-domain="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721752 4900 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721757 4900 flags.go:64] FLAG: --config-dir="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721762 4900 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721768 4900 flags.go:64] FLAG: --container-log-max-files="5" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721775 4900 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721781 4900 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721786 4900 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 02 13:42:34 crc 
kubenswrapper[4900]: I1202 13:42:34.721791 4900 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721800 4900 flags.go:64] FLAG: --contention-profiling="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721805 4900 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721810 4900 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721816 4900 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721821 4900 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721827 4900 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721832 4900 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721838 4900 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721844 4900 flags.go:64] FLAG: --enable-load-reader="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721849 4900 flags.go:64] FLAG: --enable-server="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721854 4900 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721861 4900 flags.go:64] FLAG: --event-burst="100" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721867 4900 flags.go:64] FLAG: --event-qps="50" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721871 4900 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721877 4900 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721882 4900 flags.go:64] FLAG: --eviction-hard="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721888 4900 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721894 4900 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721900 4900 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721905 4900 flags.go:64] FLAG: --eviction-soft="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721910 4900 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721915 4900 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721920 4900 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721926 4900 flags.go:64] FLAG: --experimental-mounter-path="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721931 4900 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721936 4900 flags.go:64] FLAG: --fail-swap-on="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721941 4900 flags.go:64] FLAG: --feature-gates="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721947 4900 flags.go:64] FLAG: --file-check-frequency="20s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721952 4900 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 02 13:42:34 crc kubenswrapper[4900]: 
I1202 13:42:34.721957 4900 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721963 4900 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721968 4900 flags.go:64] FLAG: --healthz-port="10248" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721974 4900 flags.go:64] FLAG: --help="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721980 4900 flags.go:64] FLAG: --hostname-override="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721985 4900 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721990 4900 flags.go:64] FLAG: --http-check-frequency="20s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.721995 4900 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722000 4900 flags.go:64] FLAG: --image-credential-provider-config="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722005 4900 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722010 4900 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722017 4900 flags.go:64] FLAG: --image-service-endpoint="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722022 4900 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722027 4900 flags.go:64] FLAG: --kube-api-burst="100" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722032 4900 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722038 4900 flags.go:64] FLAG: --kube-api-qps="50" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722043 4900 flags.go:64] FLAG: --kube-reserved="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722048 4900 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722053 4900 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722058 4900 flags.go:64] FLAG: --kubelet-cgroups="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722063 4900 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722068 4900 flags.go:64] FLAG: --lock-file="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722073 4900 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722078 4900 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722083 4900 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722092 4900 flags.go:64] FLAG: --log-json-split-stream="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722097 4900 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722102 4900 flags.go:64] FLAG: --log-text-split-stream="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722107 4900 flags.go:64] FLAG: --logging-format="text" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722113 4900 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 
13:42:34.722119 4900 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722124 4900 flags.go:64] FLAG: --manifest-url="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722129 4900 flags.go:64] FLAG: --manifest-url-header="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722135 4900 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722141 4900 flags.go:64] FLAG: --max-open-files="1000000" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722148 4900 flags.go:64] FLAG: --max-pods="110" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722153 4900 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722159 4900 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722164 4900 flags.go:64] FLAG: --memory-manager-policy="None" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722168 4900 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722175 4900 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722180 4900 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722185 4900 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722198 4900 flags.go:64] FLAG: --node-status-max-images="50" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722203 4900 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722209 4900 flags.go:64] FLAG: --oom-score-adj="-999" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722214 4900 flags.go:64] FLAG: --pod-cidr="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722219 4900 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722228 4900 flags.go:64] FLAG: --pod-manifest-path="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722233 4900 flags.go:64] FLAG: --pod-max-pids="-1" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722238 4900 flags.go:64] FLAG: --pods-per-core="0" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722244 4900 flags.go:64] FLAG: --port="10250" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722250 4900 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722255 4900 flags.go:64] FLAG: --provider-id="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722260 4900 flags.go:64] FLAG: --qos-reserved="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722266 4900 flags.go:64] FLAG: --read-only-port="10255" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722272 4900 flags.go:64] FLAG: --register-node="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722278 4900 flags.go:64] FLAG: --register-schedulable="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722283 4900 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 
13:42:34.722293 4900 flags.go:64] FLAG: --registry-burst="10" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722298 4900 flags.go:64] FLAG: --registry-qps="5" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722304 4900 flags.go:64] FLAG: --reserved-cpus="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722309 4900 flags.go:64] FLAG: --reserved-memory="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722315 4900 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722320 4900 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722325 4900 flags.go:64] FLAG: --rotate-certificates="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722331 4900 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722336 4900 flags.go:64] FLAG: --runonce="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722341 4900 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722346 4900 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722351 4900 flags.go:64] FLAG: --seccomp-default="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722356 4900 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722361 4900 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722366 4900 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722371 4900 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722376 4900 flags.go:64] FLAG: --storage-driver-password="root" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722382 4900 flags.go:64] FLAG: --storage-driver-secure="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722387 4900 flags.go:64] FLAG: --storage-driver-table="stats" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722392 4900 flags.go:64] FLAG: --storage-driver-user="root" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722397 4900 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722402 4900 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722407 4900 flags.go:64] FLAG: --system-cgroups="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722412 4900 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722420 4900 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722425 4900 flags.go:64] FLAG: --tls-cert-file="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722430 4900 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722436 4900 flags.go:64] FLAG: --tls-min-version="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722441 4900 flags.go:64] FLAG: --tls-private-key-file="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722447 4900 flags.go:64] FLAG: --topology-manager-policy="none" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722452 4900 flags.go:64] 
FLAG: --topology-manager-policy-options="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722457 4900 flags.go:64] FLAG: --topology-manager-scope="container" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722463 4900 flags.go:64] FLAG: --v="2" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722472 4900 flags.go:64] FLAG: --version="false" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722480 4900 flags.go:64] FLAG: --vmodule="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722486 4900 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.722492 4900 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722838 4900 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722848 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722854 4900 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722859 4900 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722866 4900 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722871 4900 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722876 4900 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722880 4900 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722885 4900 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722890 4900 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722894 4900 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722899 4900 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722903 4900 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722908 4900 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722912 4900 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722917 4900 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722921 4900 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722926 4900 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722930 4900 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722935 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 
13:42:34.722939 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722944 4900 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722948 4900 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722955 4900 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722966 4900 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722971 4900 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722976 4900 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722982 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722987 4900 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722991 4900 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.722996 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723000 4900 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723005 4900 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723010 4900 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723014 4900 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723019 4900 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723023 4900 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723029 4900 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723033 4900 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723038 4900 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723042 4900 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723047 4900 feature_gate.go:330] unrecognized feature gate: Example Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723052 4900 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723057 4900 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723062 4900 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 
13:42:34.723069 4900 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723075 4900 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723080 4900 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723087 4900 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723092 4900 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723097 4900 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723102 4900 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723106 4900 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723111 4900 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723116 4900 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723120 4900 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723127 4900 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723132 4900 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723137 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723143 4900 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723148 4900 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723152 4900 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723157 4900 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723161 4900 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723166 4900 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723171 4900 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723176 4900 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723180 4900 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723185 4900 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723189 4900 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.723194 4900 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.723375 4900 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.737714 4900 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.737794 4900 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.737947 4900 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.737969 4900 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.737979 4900 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.737988 4900 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.737997 4900 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738006 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738017 4900 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738025 4900 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738035 4900 feature_gate.go:330] unrecognized feature gate: 
InsightsConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738044 4900 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738052 4900 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738060 4900 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738069 4900 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738077 4900 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738084 4900 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738092 4900 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738100 4900 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738108 4900 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738116 4900 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738125 4900 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738135 4900 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738144 4900 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738156 4900 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738170 4900 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738179 4900 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738187 4900 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738196 4900 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738205 4900 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738217 4900 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738225 4900 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738234 4900 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738242 4900 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738252 4900 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738263 4900 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738273 4900 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738282 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738292 4900 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738300 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738310 4900 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738320 4900 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738329 4900 feature_gate.go:330] unrecognized feature gate: Example Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738338 4900 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738348 4900 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738357 4900 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738366 4900 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738375 4900 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738384 4900 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738392 4900 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738400 4900 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738408 4900 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738419 4900 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738429 4900 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738438 4900 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738447 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738458 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738468 4900 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738477 4900 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738485 4900 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738493 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738501 4900 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738509 4900 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738517 4900 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738524 4900 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738532 4900 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738540 4900 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738547 4900 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738555 4900 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738563 4900 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738573 4900 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738583 4900 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738595 4900 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.738611 4900 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738924 4900 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738939 4900 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738948 4900 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738958 4900 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738966 4900 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738975 4900 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738983 4900 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.738995 4900 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
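A note on the repeated walls of "unrecognized feature gate" warnings: gates like SignatureStores or GatewayAPI are OpenShift cluster-level gates, not kubelet gates, so the kubelet's feature-gate registry has no entry for them. Upstream component-base rejects unknown gates with exactly the "unrecognized feature gate: %s" text seen here; the OpenShift-built kubelet evidently downgrades that to a W-level warning and continues, and the resolved "feature gates: {map[...]}" line after each pass contains only the Kubernetes-recognized gates. A small sketch against the upstream library — the KMSv1 spec below is invented for illustration; the real kubelet registers its gates elsewhere:

package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

func main() {
	fg := featuregate.NewFeatureGate()

	// Register one gate this binary knows about (spec made up for the example).
	if err := fg.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		"KMSv1": {Default: false, PreRelease: featuregate.Deprecated},
	}); err != nil {
		panic(err)
	}

	// Known gate: accepted (recent upstream versions also emit the
	// "Setting deprecated feature gate" warning seen in this log).
	if err := fg.SetFromMap(map[string]bool{"KMSv1": true}); err != nil {
		panic(err)
	}

	// Unknown gate: upstream returns an error; the kubelet in this log
	// apparently only warns instead (assumed from the W-level lines above).
	if err := fg.SetFromMap(map[string]bool{"SignatureStores": true}); err != nil {
		fmt.Println(err) // unrecognized feature gate: SignatureStores
	}

	fmt.Println("KMSv1 enabled:", fg.Enabled("KMSv1"))
}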
Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739006 4900 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739014 4900 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739022 4900 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739030 4900 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739038 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739046 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739054 4900 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739062 4900 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739070 4900 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739078 4900 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739087 4900 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739095 4900 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739103 4900 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739111 4900 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739119 4900 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739127 4900 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739135 4900 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739143 4900 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739151 4900 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739158 4900 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739189 4900 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739198 4900 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739207 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739216 4900 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739224 4900 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739232 4900 feature_gate.go:330] unrecognized feature 
gate: VSphereDriverConfiguration Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739242 4900 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739251 4900 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739259 4900 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739268 4900 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739277 4900 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739285 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739293 4900 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739301 4900 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739309 4900 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739317 4900 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739325 4900 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739333 4900 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739340 4900 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739348 4900 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739356 4900 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739364 4900 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739372 4900 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739380 4900 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739389 4900 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739398 4900 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739409 4900 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739419 4900 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739430 4900 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739439 4900 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739449 4900 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739457 4900 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739466 4900 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739474 4900 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739483 4900 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739492 4900 feature_gate.go:330] unrecognized feature gate: Example Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739501 4900 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739509 4900 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739517 4900 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739525 4900 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739533 4900 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739541 4900 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.739549 4900 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.739562 4900 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.740228 4900 server.go:940] "Client rotation is on, will bootstrap in background" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.745263 4900 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.745421 4900 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
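The bootstrap lines above and the rotation lines just below show the client-certificate manager at work: the cert expires 2026-02-24 yet the rotation deadline lands on 2025-12-10. That gap is client-go's jitter rule: rotate at a uniformly random point 70-90% of the way through the certificate's validity window. A sketch of that computation — the notBefore value is an assumption (a one-year certificate), since only the expiration appears in the log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline mirrors the rule in client-go's certificate manager
// (as of the releases around v1.31): deadline = notBefore + jitter, with
// jitter uniform in [0.7, 0.9) of the certificate's total lifetime.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := float64(notAfter.Sub(notBefore))
	jitter := time.Duration(total * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jitter)
}

func main() {
	// notAfter comes from the log; notBefore is assumed to be one year
	// earlier, which would put the logged 2025-12-10 deadline at roughly
	// the 79% point of the lifetime -- inside the expected [70%, 90%) band.
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)

	deadline := nextRotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	fmt.Println("waiting:", time.Until(deadline).Round(time.Second))
}

The "Waiting 179h8m9.960577705s for next certificate rotation" line below is simply the difference between that jittered deadline and the current time.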
Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.746313 4900 server.go:997] "Starting client certificate rotation" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.746363 4900 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.746933 4900 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-10 00:50:44.707678284 +0000 UTC Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.747108 4900 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 179h8m9.960577705s for next certificate rotation Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.754510 4900 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.757377 4900 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.769938 4900 log.go:25] "Validated CRI v1 runtime API" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.791628 4900 log.go:25] "Validated CRI v1 image API" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.793976 4900 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.797361 4900 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-02-13-37-54-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.797409 4900 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.830824 4900 manager.go:217] Machine: {Timestamp:2025-12-02 13:42:34.828618524 +0000 UTC m=+0.244432405 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:67abec4e-a00c-4d58-8a63-f5484bdca5e1 BootID:0634cfab-4708-456e-8fb1-d034c189ea37 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 
Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:2f:a8:1b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:2f:a8:1b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:6d:62:b8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:5a:55:74 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e9:41:6d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:16:ed:65 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:46:e3:b8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:c0:79:7d:61:59 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:22:dd:f7:ad:e4:d4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.831288 4900 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.831709 4900 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.832757 4900 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.833133 4900 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.833204 4900 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.833573 4900 topology_manager.go:138] "Creating topology manager with none policy" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.833594 4900 container_manager_linux.go:303] "Creating device plugin manager" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.833995 4900 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.834066 4900 server.go:66] "Creating device plugin registration server" 
version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.834602 4900 state_mem.go:36] "Initialized new in-memory state store" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.834783 4900 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.835810 4900 kubelet.go:418] "Attempting to sync node with API server" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.835848 4900 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.835892 4900 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.835916 4900 kubelet.go:324] "Adding apiserver pod source" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.835937 4900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.838451 4900 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.839356 4900 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.840360 4900 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841013 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841039 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841047 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841056 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841070 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841079 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841087 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841099 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841110 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841121 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841164 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841173 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.841150 4900 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
38.102.83.130:6443: connect: connection refused Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.841183 4900 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841363 4900 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 02 13:42:34 crc kubenswrapper[4900]: E1202 13:42:34.841361 4900 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:42:34 crc kubenswrapper[4900]: E1202 13:42:34.841343 4900 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.841901 4900 server.go:1280] "Started kubelet" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.842284 4900 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.842394 4900 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.842436 4900 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.843400 4900 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 02 13:42:34 crc systemd[1]: Started Kubernetes Kubelet. 
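The Container Manager nodeConfig above fixes the node's resource accounting: SystemReserved of 200m CPU, 350Mi memory, and 350Mi ephemeral-storage, no KubeReserved, and a hard eviction threshold of 100Mi for memory.available. Under the standard Node Allocatable formula, allocatable = capacity - kube-reserved - system-reserved - hard eviction threshold. A quick check in Go against the 33654128640-byte MemoryCapacity from the machine info:

    package main

    import "fmt"

    const mi = 1 << 20

    // Node allocatable memory as the kubelet computes it; KubeReserved is
    // null in the nodeConfig above, so it contributes nothing here.
    func main() {
        capacity := int64(33654128640)  // MemoryCapacity from the machine info
        systemReserved := int64(350 * mi)
        evictionHard := int64(100 * mi) // memory.available hard threshold
        allocatable := capacity - systemReserved - evictionHard
        fmt.Printf("allocatable memory: %d bytes (%.1f GiB)\n",
            allocatable, float64(allocatable)/(1<<30))
    }

That works out to 33182269440 bytes, roughly 30.9 GiB, which is what the scheduler will see as allocatable memory once the node registers.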
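Every "dial tcp 38.102.83.130:6443: connect: connection refused" above is expected at this point in the boot: api-int.crc.testing points at this single node, and the kube-apiserver behind it is one of the static pods this very kubelet is about to launch from /etc/kubernetes/manifests. The client-go reflectors, the lease controller (retrying at interval="200ms" below), and the event recorder all back off and retry until the apiserver socket opens. A sketch of that probe-with-backoff pattern; the address and initial interval are taken from the log, but the function itself is illustrative, not kubelet code:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForAPIServer dials the apiserver endpoint with a capped
    // exponential backoff, returning once a TCP connection succeeds,
    // much like the retry loops producing the errors above.
    func waitForAPIServer(addr string, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        backoff := 200 * time.Millisecond // matches the lease controller's retry interval
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("apiserver %s not reachable: %w", addr, err)
            }
            time.Sleep(backoff)
            if backoff < 5*time.Second {
                backoff *= 2
            }
        }
    }

    func main() {
        if err := waitForAPIServer("api-int.crc.testing:6443", time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver is up")
    }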
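The long run of reconstruct.go:130 messages that follows is volume reconstruction: after a restart, the kubelet rebuilds its actual state of the world by scanning the volume directories left under /var/lib/kubelet/pods, and every volume it finds is re-added as "uncertain" until the mount can be verified against the desired state. On disk each volume lives at pods/<pod-uid>/volumes/<plugin>/<volume-name>, with the "/" in plugin names escaped as "~" (kubernetes.io~secret, kubernetes.io~configmap, and so on). A sketch of that directory scan; it is illustrative of the layout, not the kubelet's actual reconstruct code:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    // listReconstructedVolumes approximates the scan behind the
    // reconstruct.go lines: it walks pods/<uid>/volumes/<plugin>/<name>
    // under the kubelet root and reports each volume directory found.
    func listReconstructedVolumes(kubeletRoot string) ([]string, error) {
        var vols []string
        podsDir := filepath.Join(kubeletRoot, "pods")
        err := filepath.WalkDir(podsDir, func(path string, d fs.DirEntry, err error) error {
            if err != nil {
                return err
            }
            rel, _ := filepath.Rel(podsDir, path)
            parts := strings.Split(rel, string(filepath.Separator))
            // depth 4: <uid>/volumes/<plugin>/<volumeName>
            if d.IsDir() && len(parts) == 4 && parts[1] == "volumes" {
                vols = append(vols, fmt.Sprintf("pod %s volume %s/%s", parts[0], parts[2], parts[3]))
                return filepath.SkipDir // don't descend into volume contents
            }
            return nil
        })
        return vols, err
    }

    func main() {
        vols, err := listReconstructedVolumes("/var/lib/kubelet")
        if err != nil {
            fmt.Println(err)
        }
        for _, v := range vols {
            fmt.Println(v)
        }
    }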
Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.846740 4900 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.846788 4900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.846858 4900 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:28:25.105220899 +0000 UTC Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.846960 4900 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 768h45m50.258267758s for next certificate rotation Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.848669 4900 server.go:460] "Adding debug handlers to kubelet server" Dec 02 13:42:34 crc kubenswrapper[4900]: E1202 13:42:34.850033 4900 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.850166 4900 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.850189 4900 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.850365 4900 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 02 13:42:34 crc kubenswrapper[4900]: E1202 13:42:34.850518 4900 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d69cc38892fc5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 13:42:34.841862085 +0000 UTC m=+0.257675936,LastTimestamp:2025-12-02 13:42:34.841862085 +0000 UTC m=+0.257675936,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.852169 4900 factory.go:55] Registering systemd factory Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.852139 4900 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 02 13:42:34 crc kubenswrapper[4900]: E1202 13:42:34.852270 4900 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.852223 4900 factory.go:221] Registration of the systemd container factory successfully Dec 02 13:42:34 crc kubenswrapper[4900]: E1202 13:42:34.852285 4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: 
connection refused" interval="200ms" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.852730 4900 factory.go:153] Registering CRI-O factory Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.852776 4900 factory.go:221] Registration of the crio container factory successfully Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.852895 4900 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.852931 4900 factory.go:103] Registering Raw factory Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.852957 4900 manager.go:1196] Started watching for new ooms in manager Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.854057 4900 manager.go:319] Starting recovery of all containers Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875709 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875783 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875799 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875816 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875832 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875854 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875869 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875914 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 
13:42:34.875934 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875949 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875964 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875978 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.875993 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876012 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876026 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876047 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876062 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876077 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876096 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876110 4900 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876124 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876139 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876154 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876168 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876186 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876201 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876218 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876233 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876251 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876266 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876280 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876299 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876320 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876704 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876720 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876736 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876750 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876764 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876783 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876799 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876814 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876833 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876847 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876861 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876876 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876889 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876903 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876951 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876969 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876983 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.876997 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877010 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877032 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877049 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877066 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877081 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877096 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877111 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877124 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877140 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877154 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877169 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877182 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877199 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877214 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877228 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877244 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877258 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877272 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877286 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877305 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877318 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877333 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877350 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877364 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877378 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877392 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877407 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877424 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877440 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877455 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877469 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877484 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877498 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877512 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877528 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877542 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877560 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877574 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877589 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877605 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877621 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877635 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877695 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877711 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877726 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877741 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877755 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877770 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877784 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877804 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877818 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877831 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877845 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877865 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877880 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877897 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877912 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877927 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877942 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877956 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877973 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.877988 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878003 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878017 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878032 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878048 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878062 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878075 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878089 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878104 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878119 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878132 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878147 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878161 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878175 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878189 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878204 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878218 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878233 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.878247 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880026 4900 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880059 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880089 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880107 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880121 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880134 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880150 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880163 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880178 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880190 4900 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880202 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880216 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880233 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880245 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880257 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880270 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880281 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880295 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880307 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880322 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880335 4900 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880349 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880362 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880375 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880390 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880404 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880416 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880430 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880446 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880461 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880474 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880488 4900 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880523 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880539 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880552 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880566 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880580 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880596 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880610 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880626 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880655 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880669 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880682 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880696 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880709 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880721 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880733 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880747 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880759 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880771 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880782 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880794 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880825 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880838 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880851 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880863 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880877 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880890 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880901 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880916 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880930 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880942 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.880956 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881006 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881018 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881032 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881047 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881065 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881080 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881093 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881107 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881123 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881136 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881148 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881163 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881175 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881190 4900 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881202 4900 reconstruct.go:97] "Volume reconstruction finished" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.881212 4900 reconciler.go:26] "Reconciler: start to sync state" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.902462 4900 manager.go:324] Recovery completed Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.906945 4900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.908574 4900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.908683 4900 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.908753 4900 kubelet.go:2335] "Starting kubelet main sync loop" Dec 02 13:42:34 crc kubenswrapper[4900]: E1202 13:42:34.908834 4900 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 02 13:42:34 crc kubenswrapper[4900]: W1202 13:42:34.910277 4900 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 02 13:42:34 crc kubenswrapper[4900]: E1202 13:42:34.910358 4900 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.918605 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.920140 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.920172 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.920185 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.921546 4900 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.921564 4900 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.921588 4900 state_mem.go:36] "Initialized new in-memory state store" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.933874 4900 policy_none.go:49] "None policy: Start" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.934727 4900 memory_manager.go:170] "Starting memorymanager" 
policy="None" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.934755 4900 state_mem.go:35] "Initializing new in-memory state store" Dec 02 13:42:34 crc kubenswrapper[4900]: E1202 13:42:34.950636 4900 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.999051 4900 manager.go:334] "Starting Device Plugin manager" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.999106 4900 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.999121 4900 server.go:79] "Starting device plugin registration server" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.999589 4900 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.999609 4900 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 02 13:42:34 crc kubenswrapper[4900]: I1202 13:42:34.999882 4900 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.000111 4900 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.000145 4900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.009486 4900 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.009589 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.011048 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.011099 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.011111 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.011345 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.016938 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.017052 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.018527 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.018560 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.018570 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: E1202 13:42:35.019621 4900 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.019843 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.019884 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.019905 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.020166 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.020689 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.020789 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.022062 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.022103 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.022121 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.022278 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.022821 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.022892 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.022946 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.023068 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.023165 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.023428 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.023525 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.023551 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.023882 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.024095 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.024185 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.024503 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.024539 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.024560 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.025155 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.025186 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.025199 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.025393 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.025424 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.025468 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.025516 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.025532 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.026328 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.026361 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.026379 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: E1202 13:42:35.053914 4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.084587 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.085092 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.085949 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.086438 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.086734 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.087246 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.087675 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.087990 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.088207 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.088474 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.088753 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.089038 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.089268 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.089573 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.100184 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.101326 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.101387 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.101412 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.101451 4900 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: E1202 13:42:35.102402 4900 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.191489 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.191592 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.191694 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.191743 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.191799 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.191810 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.191910 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.191844 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192030 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192057 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192075 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192096 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192122 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192152 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192160 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192163 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192162 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192200 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192261 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192284 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192339 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192291 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192392 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192416 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192487 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192465 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192535 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192437 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192712 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.192711 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.302863 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.304836 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.304898 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.304923 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.304971 4900 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 13:42:35 crc kubenswrapper[4900]: E1202 13:42:35.305739 4900 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.348598 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.369259 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.378551 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: W1202 13:42:35.384154 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-079b906609dee63306da603c0cd4404c1f4fa0fa67be3fdca77e83dddb7e8a75 WatchSource:0}: Error finding container 079b906609dee63306da603c0cd4404c1f4fa0fa67be3fdca77e83dddb7e8a75: Status 404 returned error can't find the container with id 079b906609dee63306da603c0cd4404c1f4fa0fa67be3fdca77e83dddb7e8a75 Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.397938 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.406303 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 13:42:35 crc kubenswrapper[4900]: W1202 13:42:35.428277 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-070e0a034dd940df9617b6faee01d7393ff2d629ccee5150dc433defc9b4d893 WatchSource:0}: Error finding container 070e0a034dd940df9617b6faee01d7393ff2d629ccee5150dc433defc9b4d893: Status 404 returned error can't find the container with id 070e0a034dd940df9617b6faee01d7393ff2d629ccee5150dc433defc9b4d893 Dec 02 13:42:35 crc kubenswrapper[4900]: W1202 13:42:35.437786 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-582c6063d0a1308de79230d78f7a51f52c9791d7bf13ceb46c05abc2dfbba489 WatchSource:0}: Error finding container 582c6063d0a1308de79230d78f7a51f52c9791d7bf13ceb46c05abc2dfbba489: Status 404 returned error can't find the container with id 582c6063d0a1308de79230d78f7a51f52c9791d7bf13ceb46c05abc2dfbba489 Dec 02 13:42:35 crc kubenswrapper[4900]: E1202 13:42:35.455505 4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="800ms" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.706734 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.708808 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.708863 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.708879 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.708911 4900 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 13:42:35 crc kubenswrapper[4900]: E1202 13:42:35.709529 4900 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Dec 02 13:42:35 crc 
kubenswrapper[4900]: W1202 13:42:35.732378 4900 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 02 13:42:35 crc kubenswrapper[4900]: E1202 13:42:35.732495 4900 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:42:35 crc kubenswrapper[4900]: W1202 13:42:35.757586 4900 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 02 13:42:35 crc kubenswrapper[4900]: E1202 13:42:35.757691 4900 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.843620 4900 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.916623 4900 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="21ee6566e0b5583b6716e1a6bbea2a90a1e8d180976f95562c6faa6adfde218b" exitCode=0 Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.916731 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"21ee6566e0b5583b6716e1a6bbea2a90a1e8d180976f95562c6faa6adfde218b"} Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.916885 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"079b906609dee63306da603c0cd4404c1f4fa0fa67be3fdca77e83dddb7e8a75"} Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.917105 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.918356 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.918402 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.918417 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.919901 4900 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979" exitCode=0 Dec 02 13:42:35 
crc kubenswrapper[4900]: I1202 13:42:35.919973 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979"} Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.920004 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"582c6063d0a1308de79230d78f7a51f52c9791d7bf13ceb46c05abc2dfbba489"} Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.920083 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.920989 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.921028 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.921042 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.921776 4900 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396" exitCode=0 Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.921820 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396"} Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.921865 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"070e0a034dd940df9617b6faee01d7393ff2d629ccee5150dc433defc9b4d893"} Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.921970 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.922956 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.922983 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.923008 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.923596 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d"} Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.923637 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2fcc30c1648b2fb1784ff68850bacbf0dfce9b03f73c6431f61125851faa74ff"} Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.925423 4900 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc" exitCode=0 Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.925456 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc"} Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.925477 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0341db68ace8dd691b1b127d6075ee73156dc644a3f26b862d95c39538989ab8"} Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.925568 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.926859 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.926889 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.926916 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.930029 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.931812 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.931855 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:35 crc kubenswrapper[4900]: I1202 13:42:35.931868 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:36 crc kubenswrapper[4900]: E1202 13:42:36.257056 4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s" Dec 02 13:42:36 crc kubenswrapper[4900]: W1202 13:42:36.355083 4900 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 02 13:42:36 crc kubenswrapper[4900]: E1202 13:42:36.355190 4900 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:42:36 crc kubenswrapper[4900]: W1202 13:42:36.387064 4900 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 02 13:42:36 crc kubenswrapper[4900]: E1202 13:42:36.387165 4900 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.510164 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.511939 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.511980 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.511992 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.512022 4900 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.932516 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"78b6c26f99ca34eb2a84a471e4a5ba769d4c89f6d7f4656d50865c4893de6d2f"} Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.932701 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.934227 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.934294 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.934308 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.937666 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47"} Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.937709 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7"} Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.937724 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a"} Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.937815 4900 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.938524 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.938550 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.938560 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.941032 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.941055 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f"} Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.941120 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd"} Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.941137 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79"} Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.941840 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.941878 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.941892 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.944216 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0"} Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.944260 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297"} Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.944281 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819"} Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.944294 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae"} Dec 02 13:42:36 crc 
kubenswrapper[4900]: I1202 13:42:36.946665 4900 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="01ad120ec36b820587011bac6b3285fb7a917dc2d375dda0131a4b3a2f0e5d0f" exitCode=0 Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.946701 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"01ad120ec36b820587011bac6b3285fb7a917dc2d375dda0131a4b3a2f0e5d0f"} Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.946824 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.947625 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.947667 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:36 crc kubenswrapper[4900]: I1202 13:42:36.947678 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.954994 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33"} Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.955220 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.957144 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.957197 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.957218 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.959258 4900 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a17a0448f21ab5931068a468d29fa98efaa96991dca979de162ca47c0511a608" exitCode=0 Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.959355 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a17a0448f21ab5931068a468d29fa98efaa96991dca979de162ca47c0511a608"} Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.959449 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.959581 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.961227 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.961275 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.961297 4900 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.961300 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.961343 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:37 crc kubenswrapper[4900]: I1202 13:42:37.961371 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:38 crc kubenswrapper[4900]: I1202 13:42:38.955688 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:38 crc kubenswrapper[4900]: I1202 13:42:38.966918 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"418688a018c3914b70b23c4d970cb615fa324a9c96f315bc16b745933c319fbe"} Dec 02 13:42:38 crc kubenswrapper[4900]: I1202 13:42:38.966965 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c9e242ab9d47259a5fc883ba2497247fcc0a3287743024fa85ecc7ca85e79ff7"} Dec 02 13:42:38 crc kubenswrapper[4900]: I1202 13:42:38.966979 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2b7e4940b5eda6fcf6f9ac59ba70912ea575959193355f9890d3cccfc40764b9"} Dec 02 13:42:38 crc kubenswrapper[4900]: I1202 13:42:38.967008 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:38 crc kubenswrapper[4900]: I1202 13:42:38.967314 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:38 crc kubenswrapper[4900]: I1202 13:42:38.968007 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:38 crc kubenswrapper[4900]: I1202 13:42:38.968029 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:38 crc kubenswrapper[4900]: I1202 13:42:38.968039 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:39 crc kubenswrapper[4900]: I1202 13:42:39.483750 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:39 crc kubenswrapper[4900]: I1202 13:42:39.978288 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c9d4ef51b446368161d9b6ddb8f7c6ba4c61a6127bad7aabf9f5605cc004bfe5"} Dec 02 13:42:39 crc kubenswrapper[4900]: I1202 13:42:39.978376 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4a972d1403b1e986ffe153a1c759d2f43664b03431d927c18c98ad2b06389cb2"} Dec 02 13:42:39 crc kubenswrapper[4900]: I1202 13:42:39.978416 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:39 crc kubenswrapper[4900]: I1202 13:42:39.978417 4900 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Dec 02 13:42:39 crc kubenswrapper[4900]: I1202 13:42:39.980118 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:39 crc kubenswrapper[4900]: I1202 13:42:39.980173 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:39 crc kubenswrapper[4900]: I1202 13:42:39.980228 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:39 crc kubenswrapper[4900]: I1202 13:42:39.980250 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:39 crc kubenswrapper[4900]: I1202 13:42:39.980194 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:39 crc kubenswrapper[4900]: I1202 13:42:39.980329 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.067874 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.281389 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.281701 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.283370 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.283427 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.283444 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.290470 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.981868 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.981994 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.981994 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.983474 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.983557 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.983576 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.983865 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:40 crc 
kubenswrapper[4900]: I1202 13:42:40.983923 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.983945 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.983918 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.984056 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:40 crc kubenswrapper[4900]: I1202 13:42:40.984086 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:41 crc kubenswrapper[4900]: I1202 13:42:41.004543 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:41 crc kubenswrapper[4900]: I1202 13:42:41.985463 4900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 13:42:41 crc kubenswrapper[4900]: I1202 13:42:41.985562 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:41 crc kubenswrapper[4900]: I1202 13:42:41.985627 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:41 crc kubenswrapper[4900]: I1202 13:42:41.987464 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:41 crc kubenswrapper[4900]: I1202 13:42:41.987527 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:41 crc kubenswrapper[4900]: I1202 13:42:41.987546 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:41 crc kubenswrapper[4900]: I1202 13:42:41.987618 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:41 crc kubenswrapper[4900]: I1202 13:42:41.987704 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:41 crc kubenswrapper[4900]: I1202 13:42:41.987783 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:44 crc kubenswrapper[4900]: I1202 13:42:44.538482 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:44 crc kubenswrapper[4900]: I1202 13:42:44.538847 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:44 crc kubenswrapper[4900]: I1202 13:42:44.540805 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:44 crc kubenswrapper[4900]: I1202 13:42:44.540884 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:44 crc kubenswrapper[4900]: I1202 13:42:44.540909 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:45 crc kubenswrapper[4900]: E1202 13:42:45.020326 4900 eviction_manager.go:285] "Eviction manager: failed to get 
summary stats" err="failed to get node info: node \"crc\" not found" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.433310 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.433682 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.435350 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.435442 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.435464 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.509063 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.509290 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.511392 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.511476 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.511499 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.516431 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.998324 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.999769 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.999853 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:45 crc kubenswrapper[4900]: I1202 13:42:45.999877 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:46 crc kubenswrapper[4900]: E1202 13:42:46.512979 4900 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 02 13:42:46 crc kubenswrapper[4900]: I1202 13:42:46.845152 4900 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 02 13:42:47 crc kubenswrapper[4900]: W1202 13:42:47.467020 4900 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 02 13:42:47 crc 
kubenswrapper[4900]: I1202 13:42:47.467177 4900 trace.go:236] Trace[1023141947]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 13:42:37.465) (total time: 10001ms): Dec 02 13:42:47 crc kubenswrapper[4900]: Trace[1023141947]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:42:47.466) Dec 02 13:42:47 crc kubenswrapper[4900]: Trace[1023141947]: [10.001764987s] [10.001764987s] END Dec 02 13:42:47 crc kubenswrapper[4900]: E1202 13:42:47.467221 4900 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 02 13:42:47 crc kubenswrapper[4900]: W1202 13:42:47.588160 4900 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 02 13:42:47 crc kubenswrapper[4900]: I1202 13:42:47.588276 4900 trace.go:236] Trace[1159619449]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 13:42:37.586) (total time: 10001ms): Dec 02 13:42:47 crc kubenswrapper[4900]: Trace[1159619449]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:42:47.588) Dec 02 13:42:47 crc kubenswrapper[4900]: Trace[1159619449]: [10.00161631s] [10.00161631s] END Dec 02 13:42:47 crc kubenswrapper[4900]: E1202 13:42:47.588300 4900 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 02 13:42:47 crc kubenswrapper[4900]: I1202 13:42:47.803985 4900 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 02 13:42:47 crc kubenswrapper[4900]: I1202 13:42:47.804086 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 02 13:42:47 crc kubenswrapper[4900]: I1202 13:42:47.813312 4900 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 02 13:42:47 crc kubenswrapper[4900]: I1202 13:42:47.813391 4900 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 02 13:42:48 crc kubenswrapper[4900]: I1202 13:42:48.113541 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:48 crc kubenswrapper[4900]: I1202 13:42:48.115572 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:48 crc kubenswrapper[4900]: I1202 13:42:48.115631 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:48 crc kubenswrapper[4900]: I1202 13:42:48.115690 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:48 crc kubenswrapper[4900]: I1202 13:42:48.115736 4900 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 13:42:48 crc kubenswrapper[4900]: I1202 13:42:48.509585 4900 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 13:42:48 crc kubenswrapper[4900]: I1202 13:42:48.509725 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.494568 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.495054 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.497566 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.497639 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.497696 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.501978 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.681983 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.682343 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.684484 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.684560 4900 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.684582 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:49 crc kubenswrapper[4900]: I1202 13:42:49.717036 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 02 13:42:50 crc kubenswrapper[4900]: I1202 13:42:50.010068 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:50 crc kubenswrapper[4900]: I1202 13:42:50.010122 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:50 crc kubenswrapper[4900]: I1202 13:42:50.011902 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:50 crc kubenswrapper[4900]: I1202 13:42:50.011963 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:50 crc kubenswrapper[4900]: I1202 13:42:50.012013 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:50 crc kubenswrapper[4900]: I1202 13:42:50.012031 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:50 crc kubenswrapper[4900]: I1202 13:42:50.011983 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:50 crc kubenswrapper[4900]: I1202 13:42:50.012125 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:50 crc kubenswrapper[4900]: I1202 13:42:50.027354 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 02 13:42:51 crc kubenswrapper[4900]: I1202 13:42:51.013846 4900 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:42:51 crc kubenswrapper[4900]: I1202 13:42:51.015599 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:51 crc kubenswrapper[4900]: I1202 13:42:51.015712 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:51 crc kubenswrapper[4900]: I1202 13:42:51.015740 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:51 crc kubenswrapper[4900]: I1202 13:42:51.075202 4900 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 02 13:42:51 crc kubenswrapper[4900]: I1202 13:42:51.276503 4900 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 02 13:42:52 crc kubenswrapper[4900]: E1202 13:42:52.804567 4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.808769 4900 trace.go:236] Trace[978404463]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 13:42:38.350) (total time: 14457ms): Dec 02 13:42:52 crc kubenswrapper[4900]: 
Trace[978404463]: ---"Objects listed" error: 14457ms (13:42:52.808) Dec 02 13:42:52 crc kubenswrapper[4900]: Trace[978404463]: [14.457983493s] [14.457983493s] END Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.808824 4900 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.808929 4900 trace.go:236] Trace[793042412]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 13:42:38.544) (total time: 14264ms): Dec 02 13:42:52 crc kubenswrapper[4900]: Trace[793042412]: ---"Objects listed" error: 14264ms (13:42:52.808) Dec 02 13:42:52 crc kubenswrapper[4900]: Trace[793042412]: [14.264802893s] [14.264802893s] END Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.808958 4900 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.810583 4900 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.846718 4900 apiserver.go:52] "Watching apiserver" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.850123 4900 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.850742 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.851303 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.851381 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.851451 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.852466 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:42:52 crc kubenswrapper[4900]: E1202 13:42:52.852722 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:42:52 crc kubenswrapper[4900]: E1202 13:42:52.852621 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.853336 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:42:52 crc kubenswrapper[4900]: E1202 13:42:52.853454 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.853363 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.860732 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.861287 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.861767 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.862009 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.862371 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.862765 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.863136 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.863506 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.863831 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.870019 4900 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49672->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.870317 4900 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38112->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.870760 4900 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49672->192.168.126.11:17697: read: connection reset by peer" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.870907 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38112->192.168.126.11:17697: read: connection reset by peer" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.871339 4900 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.871371 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.904828 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.924558 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.940788 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.951795 4900 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.957540 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.977736 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:52 crc kubenswrapper[4900]: I1202 13:42:52.992338 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012542 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012601 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012629 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012667 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012691 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012714 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012733 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 02 13:42:53 crc kubenswrapper[4900]: 
I1202 13:42:53.012754 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012776 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012796 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012819 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012856 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012895 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012924 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012951 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012972 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.012992 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod 
\"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013015 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013163 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013189 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013210 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013236 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013259 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013279 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013306 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013333 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013372 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013393 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013414 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013436 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013457 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013477 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013498 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013521 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013547 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013568 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013599 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" 
(UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013620 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013661 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013681 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013705 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013745 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013767 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013789 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013811 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013832 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013853 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013887 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013907 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013928 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.013975 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014000 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014021 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014044 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014066 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014087 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014111 4900 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014131 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014153 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014176 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014198 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014229 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014252 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014277 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014298 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014322 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014345 4900 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014369 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014397 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014420 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014442 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014464 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014485 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014509 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014532 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014557 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014580 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014602 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014625 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014664 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014689 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014714 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014736 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014760 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014781 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014802 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014826 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014849 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014872 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014904 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014931 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014955 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.014978 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015007 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015033 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015057 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015081 4900 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015105 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015129 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015153 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015175 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015196 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015218 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015240 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015262 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015284 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015306 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015329 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015354 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015376 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015398 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015422 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015448 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015469 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015489 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015512 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015534 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015555 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015579 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015603 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015624 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015677 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015700 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015725 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015747 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015770 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015793 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015815 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015837 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015862 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015887 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015912 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015934 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015957 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.015978 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016001 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016028 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016051 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016073 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016096 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016121 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016144 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016171 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016195 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016217 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016238 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016258 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016283 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016306 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016327 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016349 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016371 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016394 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016418 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016439 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016462 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016484 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016507 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016529 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016551 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016574 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016598 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016619 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016664 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016688 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016712 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016743 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016767 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016790 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016812 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016836 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016879 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.016901 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017103 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017129 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017153 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017178 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017203 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017225 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017249 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017272 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017293 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017318 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017348 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017374 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017396 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017422 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017445 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017468 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017490 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017512 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017534 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017557 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017579 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017601 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017686 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017720 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017750 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017779 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017805 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017833 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017861 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017889 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.017917 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.018122 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.018148 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.018173 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.018199 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.018224 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.018451 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.018446 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.018823 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.020909 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.023278 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.023341 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.023823 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.024188 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.024453 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.024519 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.025003 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.025026 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.025109 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.025365 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.025415 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.025512 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.025657 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.025976 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.025982 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.026418 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.026603 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.027022 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.027111 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.027287 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.027445 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.027567 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.027917 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.027989 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.028120 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.028406 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.028503 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.029020 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.029298 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.029620 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.030819 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.030853 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.031094 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.031412 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.031566 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.032273 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.032526 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.032571 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.032796 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.033092 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.033453 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.033680 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.033773 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.033890 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.034249 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.034560 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.034636 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.034704 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.034805 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.035029 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.035030 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.035386 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:42:53.535360761 +0000 UTC m=+18.951174612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.035456 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.036245 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.036879 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.037245 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.037464 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.037678 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.037885 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.037975 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.038223 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.038442 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.039135 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.039201 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.039378 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.039393 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.039586 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.040263 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.040429 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.040552 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.041280 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.042928 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.043081 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.043329 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.043371 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.043485 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.043904 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.044008 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.044181 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.044191 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.044270 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.044314 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.044379 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.044464 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.044480 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.044515 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.045901 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.044512 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.043791 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.048536 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.048795 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.048844 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.048915 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.049254 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.049464 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.049563 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.049670 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.049984 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.050252 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.050411 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.050436 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.050531 4900 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.050661 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:53.550600547 +0000 UTC m=+18.966414408 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.050763 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.050880 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.050549 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.051311 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.052011 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.052427 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.052780 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.053502 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.053549 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.053620 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.053625 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.054079 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.054180 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.054275 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.054369 4900 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.054614 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.055489 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.056732 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.057023 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.057058 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.057124 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.057436 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.057688 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.058272 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.058393 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.058579 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.058856 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.058880 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.058944 4900 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.059198 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.059455 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.059791 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.059859 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.060156 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.060444 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.060526 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.062241 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:53.562186928 +0000 UTC m=+18.978000779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.064294 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.069855 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.076637 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.077079 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.077684 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.078701 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.079186 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.079470 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.079538 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.080355 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.080786 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.081779 4900 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33" exitCode=255 Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.081827 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33"} Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.083411 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.083436 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.083450 4900 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.083506 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:53.583491206 +0000 UTC m=+18.999305057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.083598 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.086933 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.086994 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.088418 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.088506 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.089035 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.089189 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.090098 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.090349 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.090450 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.092004 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.093479 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.093513 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.093530 4900 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.093599 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:53.593575845 +0000 UTC m=+19.009389716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.093982 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.094072 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.097220 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.098110 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.098367 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.098602 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.098794 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.099043 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.099595 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.099779 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.099978 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.100536 4900 scope.go:117] "RemoveContainer" containerID="97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.100629 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.100667 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.101223 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.103859 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.103877 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.104471 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.104662 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.104751 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.106666 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.107215 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.108082 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.108212 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.108227 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.109192 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.109377 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.111389 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.113279 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.114348 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.117057 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.117590 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.117794 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.118223 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.118722 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119161 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119361 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119548 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119620 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119789 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119825 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119842 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119857 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119872 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119886 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119901 4900 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119916 4900 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc 
kubenswrapper[4900]: I1202 13:42:53.119931 4900 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119946 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119959 4900 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119974 4900 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.119989 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120002 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120016 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120029 4900 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120042 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120055 4900 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120068 4900 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120081 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120094 4900 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 
crc kubenswrapper[4900]: I1202 13:42:53.120108 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120121 4900 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120135 4900 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120148 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120162 4900 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120174 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120186 4900 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120199 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120214 4900 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120226 4900 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120239 4900 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120253 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120267 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120279 4900 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120293 4900 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120306 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120322 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120336 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120348 4900 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120361 4900 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120375 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120388 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120401 4900 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120415 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120428 4900 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120443 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 
13:42:53.120456 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120469 4900 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120483 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120498 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120512 4900 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120526 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120539 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120552 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120564 4900 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120567 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120578 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120703 4900 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120718 4900 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120731 4900 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120743 4900 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120755 4900 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120782 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120795 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120810 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120822 4900 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120851 4900 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120863 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120876 4900 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121240 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121273 4900 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121285 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121298 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121312 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121324 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121336 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121348 4900 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121359 4900 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121370 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121382 4900 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121393 4900 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121407 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121420 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121432 4900 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121443 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121455 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121466 4900 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121478 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121499 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121510 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121521 4900 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121568 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121614 4900 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121684 4900 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121697 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121708 4900 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121720 4900 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.121731 4900 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 
02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.120618 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122472 4900 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122533 4900 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122549 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122564 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122589 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122602 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122619 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122633 4900 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122661 4900 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122674 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122686 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122460 4900 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122700 4900 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122713 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122727 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122739 4900 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122751 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122764 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122776 4900 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122787 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122799 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122812 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122823 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122836 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122849 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122861 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122873 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122885 4900 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122898 4900 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122912 4900 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122924 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122935 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122949 4900 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122961 4900 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122973 4900 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122985 4900 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.122997 4900 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123008 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123021 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123033 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123044 4900 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123055 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123069 4900 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123081 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123092 4900 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123106 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123117 4900 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123134 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123145 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 
13:42:53.123157 4900 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123168 4900 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123180 4900 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123191 4900 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123203 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123215 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123227 4900 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123239 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123251 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123263 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123275 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123287 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123298 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123309 
4900 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123320 4900 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123331 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123345 4900 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123358 4900 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123381 4900 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123393 4900 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123404 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123415 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123427 4900 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123438 4900 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123449 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123461 4900 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123472 4900 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123483 4900 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123494 4900 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123507 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123518 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123529 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123542 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123554 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123566 4900 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123577 4900 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123588 4900 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.123599 4900 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.124400 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.124543 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.125335 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.125368 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.125564 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.126984 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.132316 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.140336 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.144427 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.148827 4900 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.148903 4900 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.148920 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.150838 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.150869 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.150879 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.150899 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.150924 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.152763 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.171140 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.175407 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.179335 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.179365 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.179376 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.179392 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.179405 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.184559 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.189763 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.200877 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.207772 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.215233 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.215266 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.215278 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.215296 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.215311 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.218315 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.225108 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.225144 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.225154 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.225165 4900 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.225175 4900 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.225184 4900 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.225193 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.225204 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.225215 4900 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.225224 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.228256 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.228735 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.232003 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.232038 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.232050 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.232067 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.232079 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.242783 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:53 crc kubenswrapper[4900]: W1202 13:42:53.244791 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-be281385a3df742edcc5aecadb1d255471f0b3d3aa389a6749e5bf45cb45444e WatchSource:0}: Error finding container be281385a3df742edcc5aecadb1d255471f0b3d3aa389a6749e5bf45cb45444e: Status 404 returned 
error can't find the container with id be281385a3df742edcc5aecadb1d255471f0b3d3aa389a6749e5bf45cb45444e Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.251161 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.251192 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.251201 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.251219 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.251231 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.262794 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.262925 4900 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.265067 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.265096 4900 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.265122 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.265139 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.265150 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.368083 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.368115 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.368126 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.368143 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.368154 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.470708 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.470786 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.470824 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.470852 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.470871 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
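Every NodeNotReady heartbeat above carries the same root cause: the kubelet will not report Ready until the network provider writes a CNI configuration into /etc/kubernetes/cni/net.d/. A minimal triage sketch for that state, meant to be run on the node itself; only the directory path comes from the log, the helper and its output format are illustrative:

```python
#!/usr/bin/env python3
"""List what the container runtime would load from the CNI conf dir.

An empty or missing directory matches the NetworkPluginNotReady /
"no CNI configuration file" condition logged above.
"""
import json
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # path cited in the kubelet message

def list_cni_configs(conf_dir: Path = CNI_CONF_DIR) -> None:
    if not conf_dir.is_dir():
        print(f"{conf_dir} missing -- network provider has not written a config yet")
        return
    configs = sorted(conf_dir.glob("*.conf*"))  # covers both .conf and .conflist
    if not configs:
        print(f"{conf_dir} empty -- kubelet stays NotReady until a file appears")
        return
    for cfg in configs:
        try:
            doc = json.loads(cfg.read_text())
        except (OSError, json.JSONDecodeError) as exc:
            print(f"{cfg.name}: unreadable ({exc})")
            continue
        # A .conflist carries a "plugins" chain; a bare .conf is a single plugin.
        plugins = [p.get("type") for p in doc.get("plugins", [])] or [doc.get("type")]
        print(f"{cfg.name}: network {doc.get('name', '<unnamed>')!r}, plugins {plugins}")

if __name__ == "__main__":
    list_cni_configs()
```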
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.581194 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.581243 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.581252 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.581271 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.581282 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.628710 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.628804 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.628834 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.628859 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.628886 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.628981 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:42:54.628954922 +0000 UTC m=+20.044768773 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
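The TearDownAt failure directly above is an ordering problem after the restart: the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered with the kubelet, so the unmount is parked with a one-second backoff (durationBeforeRetry 1s). Drivers announce themselves by dropping a UNIX socket into the kubelet's plugin-registration directory; the sketch below lists those sockets, assuming the default kubelet root dir (the path and socket-naming convention are assumptions, not taken from this log):

```python
#!/usr/bin/env python3
"""Show which CSI drivers have registered with the kubelet so far."""
import stat
from pathlib import Path

# Default registration dir under the kubelet root; adjust for a custom --root-dir.
REGISTRY = Path("/var/lib/kubelet/plugins_registry")

def registration_sockets(registry: Path = REGISTRY) -> list[str]:
    if not registry.is_dir():
        return []
    # Each registered plugin exposes a UNIX socket, e.g. <driver>-reg.sock.
    return sorted(
        entry.name
        for entry in registry.iterdir()
        if stat.S_ISSOCK(entry.stat().st_mode)
    )

if __name__ == "__main__":
    sockets = registration_sockets()
    print("registration sockets:", sockets or "none")
    wanted = "kubevirt.io.hostpath-provisioner"  # driver named in the error above
    if not any(wanted in name for name in sockets):
        print(f"{wanted} not registered yet -- unmount will keep retrying")
```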
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:42:54.628954922 +0000 UTC m=+20.044768773 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629082 4900 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629114 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:54.629107917 +0000 UTC m=+20.044921768 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629107 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629160 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629168 4900 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629272 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629291 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629306 4900 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629349 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-02 13:42:54.629312932 +0000 UTC m=+20.045126943 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629396 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:54.629365384 +0000 UTC m=+20.045179275 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629186 4900 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:53 crc kubenswrapper[4900]: E1202 13:42:53.629507 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:54.629493398 +0000 UTC m=+20.045307469 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
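The cluster of "not registered" errors above concerns projected service-account volumes: each kube-api-access-* mount is assembled from the pod's token plus the namespace's kube-root-ca.crt ConfigMap (and openshift-service-ca.crt on OpenShift), and the mounts keep failing while those objects are absent from the kubelet's local object cache after the restart. A quick API-side sanity check that the source ConfigMaps exist, assuming the external kubernetes Python client and a working kubeconfig:

```python
#!/usr/bin/env python3
"""Verify the ConfigMaps that back a kube-api-access projected volume."""
from kubernetes import client, config
from kubernetes.client.rest import ApiException

def check_projected_sources(namespace: str) -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    for name in ("kube-root-ca.crt", "openshift-service-ca.crt"):
        try:
            v1.read_namespaced_config_map(name, namespace)
            print(f"{namespace}/{name}: present on the API server")
        except ApiException as exc:
            print(f"{namespace}/{name}: {exc.status} {exc.reason}")

if __name__ == "__main__":
    # Namespace taken from the failing pods in the log above.
    check_projected_sources("openshift-network-diagnostics")
```

If the objects exist on the API server but the kubelet still reports them as not registered, the kubelet simply has not synced them into its cache yet, which is exactly what the one-second retry backoff above is waiting out.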
Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.685315 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.685361 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.685375 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.685392 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.685403 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.789189 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.789244 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.789260 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.789284 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.789301 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.892709 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.892757 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.892777 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.892801 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.892819 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.995697 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.995746 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.995759 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.995779 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:53 crc kubenswrapper[4900]: I1202 13:42:53.995798 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:53Z","lastTransitionTime":"2025-12-02T13:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
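Every node and pod status patch so far has died with "connection refused" from the network-node-identity webhook at https://127.0.0.1:9743: nothing is listening yet (the webhook pod's containers only start in the entries that follow). A bare TCP probe is enough to separate "listener absent" from a TLS-level fault; a hypothetical stdlib-only helper:

```python
#!/usr/bin/env python3
"""Probe the admission webhook endpoint the kubelet keeps failing to reach."""
import socket

def probe(host: str = "127.0.0.1", port: int = 9743, timeout: float = 2.0) -> None:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            print(f"{host}:{port} accepts TCP -- check TLS and the serving cert next")
    except ConnectionRefusedError:
        print(f"{host}:{port} connection refused -- webhook server not up yet")
    except socket.timeout:
        print(f"{host}:{port} timed out -- traffic filtered or process hung")

if __name__ == "__main__":
    probe()
```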
Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.085771 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"be281385a3df742edcc5aecadb1d255471f0b3d3aa389a6749e5bf45cb45444e"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.087853 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.087885 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.087898 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d0b80659e8be1ded959ef9e9ecbccbd05d16c0bb02a442ab52600a80c6b60796"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.090261 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.090292 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e2f185cdc7560a3c914d4c496c3c79ffb23b244d65c706fedbe1277831c43ae8"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.092436 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.093990 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.094461 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.098260 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.098315 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.098329 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.098343 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.098355 4900 setters.go:603] "Node
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:54Z","lastTransitionTime":"2025-12-02T13:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.108852 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.129867 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.149533 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.169180 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02
T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.187841 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.200662 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.200714 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.200742 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.200760 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.200772 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:54Z","lastTransitionTime":"2025-12-02T13:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.203678 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.215502 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.230615 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.245173 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.262049 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.281218 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.294918 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.304268 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.304328 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.304354 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.304380 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.304404 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:54Z","lastTransitionTime":"2025-12-02T13:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.310581 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.329914 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.407266 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.407339 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.407360 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.407397 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.407415 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:54Z","lastTransitionTime":"2025-12-02T13:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.510617 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.510716 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.510735 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.510764 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.510787 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:54Z","lastTransitionTime":"2025-12-02T13:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.613926 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.613979 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.613989 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.614008 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.614019 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:54Z","lastTransitionTime":"2025-12-02T13:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.638489 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.638694 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.638780 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 13:42:56.638732414 +0000 UTC m=+22.054546305 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.638865 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.638921 4900 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.638959 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.639037 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:56.639009201 +0000 UTC m=+22.054823052 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.639062 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.639090 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.639087 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.639106 4900 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.639143 4900 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.639165 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:56.639145975 +0000 UTC m=+22.054959836 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.639203 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.639222 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.639236 4900 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.639205 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:56.639190587 +0000 UTC m=+22.055004478 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.639280 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:56.639272659 +0000 UTC m=+22.055086510 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.718604 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.718707 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.718731 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.718775 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.718798 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:54Z","lastTransitionTime":"2025-12-02T13:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.822943 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.823016 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.823033 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.823059 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.823081 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:54Z","lastTransitionTime":"2025-12-02T13:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.909845 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.909969 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.909860 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.910080 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.910234 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:42:54 crc kubenswrapper[4900]: E1202 13:42:54.910566 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.918860 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.919957 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.922377 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.924057 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.926135 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.926378 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.926434 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.926453 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.926481 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.926499 4900 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:54Z","lastTransitionTime":"2025-12-02T13:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.927239 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.928473 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.930683 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.932420 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.934420 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.935580 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.936060 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.937887 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.938930 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.940077 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.943798 4900 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.944572 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.945842 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.946382 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.947136 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.952263 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.952934 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.954314 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.954881 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.956237 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.956863 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.957669 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.959197 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.959840 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.961358 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.962101 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.963387 4900 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.963586 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.965720 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.966890 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.967543 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.969074 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.969604 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.970496 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.971792 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.972635 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.974021 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.974623 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.976060 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.976915 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.978364 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.979057 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.980422 4900 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.980946 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.982007 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.982492 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.983300 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.983859 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.984705 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.985244 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.985700 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 02 13:42:54 crc kubenswrapper[4900]: I1202 13:42:54.989554 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.008742 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.029123 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.029192 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.029217 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.029249 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.029273 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:55Z","lastTransitionTime":"2025-12-02T13:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.035111 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.062013 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.082190 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.132109 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.132164 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.132182 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.132204 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.132223 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:55Z","lastTransitionTime":"2025-12-02T13:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.236130 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.236195 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.236211 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.236231 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.236244 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:55Z","lastTransitionTime":"2025-12-02T13:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.340106 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.340191 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.340212 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.340240 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.340270 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:55Z","lastTransitionTime":"2025-12-02T13:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.443055 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.443168 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.443196 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.443230 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.443254 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:55Z","lastTransitionTime":"2025-12-02T13:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.519483 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.526449 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.532037 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.545107 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.546737 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.546804 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.546823 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.546847 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.546866 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:55Z","lastTransitionTime":"2025-12-02T13:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.566243 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.592531 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.614496 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.636331 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.650138 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.650203 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.650222 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.650248 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.650267 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:55Z","lastTransitionTime":"2025-12-02T13:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.660190 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.683812 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.719244 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.752959 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.753027 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.753043 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.753065 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.753079 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:55Z","lastTransitionTime":"2025-12-02T13:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.764119 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.785337 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.801089 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.821567 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.836022 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.850073 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.855395 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.855434 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.855443 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.855460 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.855486 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:55Z","lastTransitionTime":"2025-12-02T13:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.870696 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.959310 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 
13:42:55.959385 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.959404 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.959430 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:55 crc kubenswrapper[4900]: I1202 13:42:55.959449 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:55Z","lastTransitionTime":"2025-12-02T13:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.062848 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.062920 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.062941 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.062972 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.062993 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:56Z","lastTransitionTime":"2025-12-02T13:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.166744 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.166823 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.166841 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.166885 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.166905 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:56Z","lastTransitionTime":"2025-12-02T13:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.269549 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.269628 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.269676 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.269706 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.269727 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:56Z","lastTransitionTime":"2025-12-02T13:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.372436 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.372498 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.372515 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.372541 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.372560 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:56Z","lastTransitionTime":"2025-12-02T13:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.475148 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.475212 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.475224 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.475244 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.475258 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:56Z","lastTransitionTime":"2025-12-02T13:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.577957 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.578006 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.578017 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.578038 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.578051 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:56Z","lastTransitionTime":"2025-12-02T13:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.659062 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.659181 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.659214 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.659238 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659295 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:43:00.659252745 +0000 UTC m=+26.075066606 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659365 4900 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.659371 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659396 4900 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659433 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:00.65941509 +0000 UTC m=+26.075228941 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659489 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659536 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659550 4900 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659590 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659518 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-02 13:43:00.659484092 +0000 UTC m=+26.075297943 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659611 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659632 4900 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659633 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:00.659621296 +0000 UTC m=+26.075435147 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.659725 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:00.659715338 +0000 UTC m=+26.075529199 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.680424 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.680455 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.680469 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.680487 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.680499 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:56Z","lastTransitionTime":"2025-12-02T13:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.783316 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.783347 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.783357 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.783371 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.783381 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:56Z","lastTransitionTime":"2025-12-02T13:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.822688 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5x7v9"] Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.823350 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5x7v9" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.828869 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.830264 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.832692 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.848453 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.873764 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.885444 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.885507 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.885524 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.885550 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.885570 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:56Z","lastTransitionTime":"2025-12-02T13:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.891042 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.908514 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.909730 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.909797 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.909755 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.909920 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.910145 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:42:56 crc kubenswrapper[4900]: E1202 13:42:56.910313 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.929036 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.952243 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.962537 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0fc9e986-c2f6-4fac-b61c-de2ef11882c1-hosts-file\") pod \"node-resolver-5x7v9\" (UID: \"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\") " pod="openshift-dns/node-resolver-5x7v9" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.962716 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72d7m\" (UniqueName: \"kubernetes.io/projected/0fc9e986-c2f6-4fac-b61c-de2ef11882c1-kube-api-access-72d7m\") pod \"node-resolver-5x7v9\" (UID: \"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\") " pod="openshift-dns/node-resolver-5x7v9" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.979387 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.988143 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.988173 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.988185 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.988205 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.988220 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:56Z","lastTransitionTime":"2025-12-02T13:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:56 crc kubenswrapper[4900]: I1202 13:42:56.998750 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.021817 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.064102 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72d7m\" (UniqueName: \"kubernetes.io/projected/0fc9e986-c2f6-4fac-b61c-de2ef11882c1-kube-api-access-72d7m\") pod \"node-resolver-5x7v9\" (UID: \"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\") " pod="openshift-dns/node-resolver-5x7v9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.064167 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0fc9e986-c2f6-4fac-b61c-de2ef11882c1-hosts-file\") pod \"node-resolver-5x7v9\" (UID: \"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\") " pod="openshift-dns/node-resolver-5x7v9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.064277 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0fc9e986-c2f6-4fac-b61c-de2ef11882c1-hosts-file\") pod \"node-resolver-5x7v9\" (UID: \"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\") " pod="openshift-dns/node-resolver-5x7v9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.091288 4900 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.091357 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.091375 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.091404 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.091429 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:57Z","lastTransitionTime":"2025-12-02T13:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.102326 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18"} Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.107292 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72d7m\" (UniqueName: \"kubernetes.io/projected/0fc9e986-c2f6-4fac-b61c-de2ef11882c1-kube-api-access-72d7m\") pod \"node-resolver-5x7v9\" (UID: \"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\") " pod="openshift-dns/node-resolver-5x7v9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.119951 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.139905 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.145233 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5x7v9" Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.161080 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fc9e986_c2f6_4fac_b61c_de2ef11882c1.slice/crio-4415ffe9b6ea2787ff25716b9552fc90c7d319efab0d965c5c2bee8df3395d72 WatchSource:0}: Error finding container 4415ffe9b6ea2787ff25716b9552fc90c7d319efab0d965c5c2bee8df3395d72: Status 404 returned error can't find the container with id 4415ffe9b6ea2787ff25716b9552fc90c7d319efab0d965c5c2bee8df3395d72 Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.177076 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identit
y-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.198075 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.198113 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.198126 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.198147 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.198162 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:57Z","lastTransitionTime":"2025-12-02T13:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.209666 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.239068 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ngwgq"] Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.239438 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.240327 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-r8pv9"] Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.240495 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.258139 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.258493 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ckvw2"] Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.259039 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.259494 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.259596 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.259684 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.259790 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.259857 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.259920 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.259874 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.260081 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.262616 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.276693 4900 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 02 13:42:57 crc kubenswrapper[4900]: E1202 13:42:57.276753 4900 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.276853 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.278405 4900 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 02 13:42:57 crc kubenswrapper[4900]: E1202 13:42:57.278454 4900 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource 
\"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.278627 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-88rnd"] Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.279961 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.287851 4900 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 02 13:42:57 crc kubenswrapper[4900]: E1202 13:42:57.287893 4900 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.287852 4900 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 02 13:42:57 crc kubenswrapper[4900]: E1202 13:42:57.287933 4900 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.292956 4900 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 02 13:42:57 crc kubenswrapper[4900]: E1202 13:42:57.292996 4900 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.293058 4900 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 02 13:42:57 crc kubenswrapper[4900]: E1202 
13:42:57.293070 4900 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.293122 4900 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 02 13:42:57 crc kubenswrapper[4900]: E1202 13:42:57.293135 4900 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.293171 4900 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 02 13:42:57 crc kubenswrapper[4900]: E1202 13:42:57.293185 4900 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.293320 4900 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 02 13:42:57 crc kubenswrapper[4900]: E1202 13:42:57.293375 4900 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.304569 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.312589 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.312635 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.312660 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.312677 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.312689 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:57Z","lastTransitionTime":"2025-12-02T13:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.347113 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserv
er-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366550 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-systemd\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366589 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqb94\" (UniqueName: \"kubernetes.io/projected/57723040-ba7b-43ac-99c5-234dac2c90ce-kube-api-access-sqb94\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 
crc kubenswrapper[4900]: I1202 13:42:57.366611 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-rootfs\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366629 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-hostroot\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366658 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wvm\" (UniqueName: \"kubernetes.io/projected/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-kube-api-access-n9wvm\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366685 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-cni-dir\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366700 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-var-lib-cni-bin\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366714 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-bin\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366731 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-487wr\" (UniqueName: \"kubernetes.io/projected/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-kube-api-access-487wr\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366747 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-var-lib-kubelet\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366761 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-netd\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366776 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-etc-openvswitch\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366791 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-etc-kubernetes\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366806 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-kubelet\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366820 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-var-lib-cni-multus\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366839 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovn-node-metrics-cert\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366855 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-script-lib\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366873 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-system-cni-dir\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366890 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/57723040-ba7b-43ac-99c5-234dac2c90ce-cni-binary-copy\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366906 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-proxy-tls\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366925 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-os-release\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366941 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366957 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-system-cni-dir\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366973 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-cnibin\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.366989 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-systemd-units\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367005 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-openvswitch\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367058 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-ovn-kubernetes\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367110 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367132 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-env-overrides\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367183 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-run-multus-certs\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367202 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-var-lib-openvswitch\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367222 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-config\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367239 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-netns\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367276 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-conf-dir\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367292 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-slash\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367307 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-mcd-auth-proxy-config\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367326 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-cnibin\") pod 
\"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367340 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-cni-binary-copy\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367356 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-ovn\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367373 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/57723040-ba7b-43ac-99c5-234dac2c90ce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367390 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-log-socket\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367419 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-socket-dir-parent\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367437 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-run-k8s-cni-cncf-io\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367452 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-run-netns\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367466 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-node-log\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367485 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4d72\" (UniqueName: 
\"kubernetes.io/projected/338f7f04-2450-4efb-a2e7-3c0e13eb8998-kube-api-access-z4d72\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367511 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-os-release\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.367526 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-daemon-config\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.372676 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.402046 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.414844 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.414888 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.414900 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.414917 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.414930 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:57Z","lastTransitionTime":"2025-12-02T13:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.421919 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.437284 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.453766 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469058 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-487wr\" 
(UniqueName: \"kubernetes.io/projected/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-kube-api-access-487wr\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469102 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-var-lib-kubelet\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469119 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-netd\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469137 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-kubelet\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469156 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-etc-openvswitch\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469173 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-etc-kubernetes\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469191 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-proxy-tls\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469209 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-var-lib-cni-multus\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469225 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovn-node-metrics-cert\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469244 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-script-lib\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469263 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-system-cni-dir\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469279 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/57723040-ba7b-43ac-99c5-234dac2c90ce-cni-binary-copy\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469301 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-cnibin\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469321 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-os-release\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469337 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469352 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-system-cni-dir\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469368 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-systemd-units\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469386 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-openvswitch\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469403 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-ovn-kubernetes\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469421 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469438 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-env-overrides\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469454 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-run-multus-certs\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469472 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-var-lib-openvswitch\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469490 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-config\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469508 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-slash\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469524 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-netns\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469550 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-conf-dir\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469564 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-mcd-auth-proxy-config\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469581 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-cnibin\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469598 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-cni-binary-copy\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469612 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-ovn\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469628 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/57723040-ba7b-43ac-99c5-234dac2c90ce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469661 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-log-socket\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469681 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-run-netns\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469703 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-socket-dir-parent\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469732 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-run-k8s-cni-cncf-io\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469750 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-daemon-config\") pod \"multus-r8pv9\" (UID: 
\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469765 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-node-log\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469781 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4d72\" (UniqueName: \"kubernetes.io/projected/338f7f04-2450-4efb-a2e7-3c0e13eb8998-kube-api-access-z4d72\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469803 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-os-release\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469819 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9wvm\" (UniqueName: \"kubernetes.io/projected/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-kube-api-access-n9wvm\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469834 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-systemd\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469863 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqb94\" (UniqueName: \"kubernetes.io/projected/57723040-ba7b-43ac-99c5-234dac2c90ce-kube-api-access-sqb94\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469879 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-rootfs\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469894 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-hostroot\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469909 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-bin\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: 
I1202 13:42:57.469932 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-cni-dir\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.469947 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-var-lib-cni-bin\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.470026 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-var-lib-cni-bin\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.470349 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-var-lib-kubelet\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.470376 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-netd\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.470398 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-kubelet\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.470419 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-etc-openvswitch\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.470440 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-etc-kubernetes\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.470929 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-conf-dir\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.470962 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-var-lib-cni-multus\") pod 
\"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.470984 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-openvswitch\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.471003 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-ovn-kubernetes\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.471024 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.471080 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-run-multus-certs\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.471099 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-run-netns\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.471243 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-socket-dir-parent\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.471349 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-log-socket\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.471432 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-systemd\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.471488 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-netns\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc 
kubenswrapper[4900]: I1202 13:42:57.471513 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-node-log\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.471915 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-hostroot\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.472582 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/57723040-ba7b-43ac-99c5-234dac2c90ce-cni-binary-copy\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.472635 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-os-release\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.472673 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-rootfs\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.472699 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-os-release\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.472755 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-cnibin\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.472785 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-system-cni-dir\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.472896 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-cni-dir\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.472918 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-cnibin\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.472932 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-bin\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.473058 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-ovn\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.473168 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-system-cni-dir\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.473254 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-host-run-k8s-cni-cncf-io\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.473696 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-mcd-auth-proxy-config\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.473787 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-systemd-units\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.473812 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-var-lib-openvswitch\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.473840 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-cni-binary-copy\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.473917 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57723040-ba7b-43ac-99c5-234dac2c90ce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " 
pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.474006 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-slash\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.474455 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-multus-daemon-config\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.478251 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-proxy-tls\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.486976 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.492337 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqb94\" (UniqueName: \"kubernetes.io/projected/57723040-ba7b-43ac-99c5-234dac2c90ce-kube-api-access-sqb94\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.493088 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9wvm\" (UniqueName: \"kubernetes.io/projected/7cacd7d0-a1a1-4ea0-b918-a73c8220e500-kube-api-access-n9wvm\") pod \"multus-r8pv9\" (UID: \"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\") " pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.496031 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-487wr\" (UniqueName: \"kubernetes.io/projected/1c8f7b18-f260-4beb-b4ff-0af7e505c7d1-kube-api-access-487wr\") pod \"machine-config-daemon-ngwgq\" (UID: \"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\") " pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.506000 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.517450 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.517485 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.517494 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.517508 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.517518 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:57Z","lastTransitionTime":"2025-12-02T13:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.520401 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.537604 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.553948 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.554748 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.560265 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-r8pv9" Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.569556 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c8f7b18_f260_4beb_b4ff_0af7e505c7d1.slice/crio-cdbd8403c46600e350471a8f745df7b2ef18cccc1bc6605024dddc6da757582d WatchSource:0}: Error finding container cdbd8403c46600e350471a8f745df7b2ef18cccc1bc6605024dddc6da757582d: Status 404 returned error can't find the container with id cdbd8403c46600e350471a8f745df7b2ef18cccc1bc6605024dddc6da757582d Dec 02 13:42:57 crc kubenswrapper[4900]: W1202 13:42:57.572123 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cacd7d0_a1a1_4ea0_b918_a73c8220e500.slice/crio-c0873e7c68b83ed76f5dda17986f2253987fc92f518fb6b0a50168322de6713e WatchSource:0}: Error finding container c0873e7c68b83ed76f5dda17986f2253987fc92f518fb6b0a50168322de6713e: Status 404 returned error can't find the container with id c0873e7c68b83ed76f5dda17986f2253987fc92f518fb6b0a50168322de6713e Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.572875 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.592806 4900 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d7
2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.604733 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.615254 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.619925 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.620025 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.620043 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.620074 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.620102 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:57Z","lastTransitionTime":"2025-12-02T13:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.634146 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:57Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.722549 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.722581 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.722589 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.722603 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.722612 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:57Z","lastTransitionTime":"2025-12-02T13:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.825463 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.825558 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.825574 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.825600 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.825617 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:57Z","lastTransitionTime":"2025-12-02T13:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.929137 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.929598 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.929773 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.929908 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:57 crc kubenswrapper[4900]: I1202 13:42:57.930033 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:57Z","lastTransitionTime":"2025-12-02T13:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.033321 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.033394 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.033410 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.033439 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.033453 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:58Z","lastTransitionTime":"2025-12-02T13:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.110242 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.110756 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.110780 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"cdbd8403c46600e350471a8f745df7b2ef18cccc1bc6605024dddc6da757582d"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.112094 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5x7v9" event={"ID":"0fc9e986-c2f6-4fac-b61c-de2ef11882c1","Type":"ContainerStarted","Data":"17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.112156 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5x7v9" event={"ID":"0fc9e986-c2f6-4fac-b61c-de2ef11882c1","Type":"ContainerStarted","Data":"4415ffe9b6ea2787ff25716b9552fc90c7d319efab0d965c5c2bee8df3395d72"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.117198 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r8pv9" event={"ID":"7cacd7d0-a1a1-4ea0-b918-a73c8220e500","Type":"ContainerStarted","Data":"7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.117259 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r8pv9" event={"ID":"7cacd7d0-a1a1-4ea0-b918-a73c8220e500","Type":"ContainerStarted","Data":"c0873e7c68b83ed76f5dda17986f2253987fc92f518fb6b0a50168322de6713e"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.128115 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.136523 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.136561 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.136599 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.136618 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.136633 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:58Z","lastTransitionTime":"2025-12-02T13:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.145752 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.169708 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\
",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 
2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.187579 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.206198 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.218959 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.235398 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.238939 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.238988 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.239001 4900 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.239021 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.239033 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:58Z","lastTransitionTime":"2025-12-02T13:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.248166 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.260584 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.281437 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.293493 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovn-node-metrics-cert\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.301009 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/root
fs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.336007 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.344800 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.344836 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.344845 4900 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.344861 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.344871 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:58Z","lastTransitionTime":"2025-12-02T13:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.355508 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.367279 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.378529 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.391392 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.399086 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.406333 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-env-overrides\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.413598 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.420240 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.425018 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.426075 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/57723040-ba7b-43ac-99c5-234dac2c90ce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ckvw2\" (UID: \"57723040-ba7b-43ac-99c5-234dac2c90ce\") " pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.436946 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.447727 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.447773 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.447788 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.447809 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.447824 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:58Z","lastTransitionTime":"2025-12-02T13:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.448378 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.451862 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.461955 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: E1202 13:42:58.472552 4900 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Dec 02 13:42:58 crc kubenswrapper[4900]: E1202 13:42:58.472629 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-config podName:338f7f04-2450-4efb-a2e7-3c0e13eb8998 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:58.97260653 +0000 UTC m=+24.388420591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-config") pod "ovnkube-node-88rnd" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998") : failed to sync configmap cache: timed out waiting for the condition Dec 02 13:42:58 crc kubenswrapper[4900]: E1202 13:42:58.472910 4900 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Dec 02 13:42:58 crc kubenswrapper[4900]: E1202 13:42:58.473044 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-script-lib podName:338f7f04-2450-4efb-a2e7-3c0e13eb8998 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:58.973018071 +0000 UTC m=+24.388831942 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-script-lib") pod "ovnkube-node-88rnd" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998") : failed to sync configmap cache: timed out waiting for the condition Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.475533 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.489106 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: E1202 13:42:58.492957 4900 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 02 13:42:58 crc kubenswrapper[4900]: E1202 13:42:58.493026 4900 projected.go:194] Error preparing data for projected volume kube-api-access-z4d72 for pod openshift-ovn-kubernetes/ovnkube-node-88rnd: failed to sync configmap cache: timed out waiting for the condition Dec 02 13:42:58 crc kubenswrapper[4900]: E1202 13:42:58.493127 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/338f7f04-2450-4efb-a2e7-3c0e13eb8998-kube-api-access-z4d72 podName:338f7f04-2450-4efb-a2e7-3c0e13eb8998 nodeName:}" failed. No retries permitted until 2025-12-02 13:42:58.993097184 +0000 UTC m=+24.408911045 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-z4d72" (UniqueName: "kubernetes.io/projected/338f7f04-2450-4efb-a2e7-3c0e13eb8998-kube-api-access-z4d72") pod "ovnkube-node-88rnd" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998") : failed to sync configmap cache: timed out waiting for the condition Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.500936 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.509951 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.519606 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.527915 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.542779 4900 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting
\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:58Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.550601 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.550632 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.550660 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.550679 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.550690 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:58Z","lastTransitionTime":"2025-12-02T13:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.560769 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.625623 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.643458 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.649234 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.652990 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.653528 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.653556 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.653576 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.653591 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:58Z","lastTransitionTime":"2025-12-02T13:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:58 crc kubenswrapper[4900]: W1202 13:42:58.674767 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57723040_ba7b_43ac_99c5_234dac2c90ce.slice/crio-a0c62177825be7dfaa70f8a6ca128613c0e49a8176a4f798d0278586ebcb6dc5 WatchSource:0}: Error finding container a0c62177825be7dfaa70f8a6ca128613c0e49a8176a4f798d0278586ebcb6dc5: Status 404 returned error can't find the container with id a0c62177825be7dfaa70f8a6ca128613c0e49a8176a4f798d0278586ebcb6dc5 Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.679765 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.760296 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.760344 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.760356 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.760376 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.760387 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:58Z","lastTransitionTime":"2025-12-02T13:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.872265 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.872314 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.872334 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.872357 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.872374 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:58Z","lastTransitionTime":"2025-12-02T13:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.909593 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.909683 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:42:58 crc kubenswrapper[4900]: E1202 13:42:58.909806 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.909861 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:42:58 crc kubenswrapper[4900]: E1202 13:42:58.909938 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:42:58 crc kubenswrapper[4900]: E1202 13:42:58.910071 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.975461 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.975511 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.975529 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.975551 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.975567 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:58Z","lastTransitionTime":"2025-12-02T13:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.984546 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-script-lib\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.984616 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-config\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.986261 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-script-lib\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:58 crc kubenswrapper[4900]: I1202 13:42:58.986452 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-config\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.079244 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.079283 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.079292 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.079308 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:59 crc 
kubenswrapper[4900]: I1202 13:42:59.079320 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:59Z","lastTransitionTime":"2025-12-02T13:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.085683 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4d72\" (UniqueName: \"kubernetes.io/projected/338f7f04-2450-4efb-a2e7-3c0e13eb8998-kube-api-access-z4d72\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.094007 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4d72\" (UniqueName: \"kubernetes.io/projected/338f7f04-2450-4efb-a2e7-3c0e13eb8998-kube-api-access-z4d72\") pod \"ovnkube-node-88rnd\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.109876 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.121934 4900 generic.go:334] "Generic (PLEG): container finished" podID="57723040-ba7b-43ac-99c5-234dac2c90ce" containerID="5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c" exitCode=0 Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.121979 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" event={"ID":"57723040-ba7b-43ac-99c5-234dac2c90ce","Type":"ContainerDied","Data":"5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c"} Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.122009 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" event={"ID":"57723040-ba7b-43ac-99c5-234dac2c90ce","Type":"ContainerStarted","Data":"a0c62177825be7dfaa70f8a6ca128613c0e49a8176a4f798d0278586ebcb6dc5"} Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.142591 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.161422 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: W1202 13:42:59.164611 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod338f7f04_2450_4efb_a2e7_3c0e13eb8998.slice/crio-04410f51f2d59ec9e3e78ea84ec3a051b020b87a665c3c265986f1f6689d272a WatchSource:0}: Error finding container 04410f51f2d59ec9e3e78ea84ec3a051b020b87a665c3c265986f1f6689d272a: Status 404 returned error can't find the container with id 04410f51f2d59ec9e3e78ea84ec3a051b020b87a665c3c265986f1f6689d272a Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.177533 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.186186 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.186256 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.186273 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.186296 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.186317 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:59Z","lastTransitionTime":"2025-12-02T13:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.202185 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.218572 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.242868 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.262871 4900 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.281986 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.289436 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.289502 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.289520 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.289544 4900 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.289562 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:59Z","lastTransitionTime":"2025-12-02T13:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.298107 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.316296 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.331025 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.347805 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.365042 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.392225 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.392275 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.392293 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.392318 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.392339 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:59Z","lastTransitionTime":"2025-12-02T13:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.495304 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.495344 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.495356 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.495374 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.495388 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:59Z","lastTransitionTime":"2025-12-02T13:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.597833 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.597904 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.597924 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.597952 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.597973 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:59Z","lastTransitionTime":"2025-12-02T13:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.700448 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.700481 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.700490 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.700505 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.700517 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:59Z","lastTransitionTime":"2025-12-02T13:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.804603 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.804740 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.804789 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.804824 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.804846 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:59Z","lastTransitionTime":"2025-12-02T13:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.908477 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.908546 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.908567 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.908595 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.908616 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:42:59Z","lastTransitionTime":"2025-12-02T13:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.928470 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-p8tll"] Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.929075 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-p8tll" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.931733 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.931938 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.932835 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.933121 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.949474 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.969473 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:42:59 crc kubenswrapper[4900]: I1202 13:42:59.986179 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:42:59.999986 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:42:59Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.011338 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.011397 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.011415 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.011442 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.011463 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:00Z","lastTransitionTime":"2025-12-02T13:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.018776 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.045139 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{
\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.067881 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.096394 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.096531 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e94f1e8-0edf-4550-bf19-da9690ade27d-serviceca\") pod \"node-ca-p8tll\" (UID: \"9e94f1e8-0edf-4550-bf19-da9690ade27d\") " pod="openshift-image-registry/node-ca-p8tll" Dec 02 13:43:00 
crc kubenswrapper[4900]: I1202 13:43:00.096707 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khptv\" (UniqueName: \"kubernetes.io/projected/9e94f1e8-0edf-4550-bf19-da9690ade27d-kube-api-access-khptv\") pod \"node-ca-p8tll\" (UID: \"9e94f1e8-0edf-4550-bf19-da9690ade27d\") " pod="openshift-image-registry/node-ca-p8tll" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.096774 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e94f1e8-0edf-4550-bf19-da9690ade27d-host\") pod \"node-ca-p8tll\" (UID: \"9e94f1e8-0edf-4550-bf19-da9690ade27d\") " pod="openshift-image-registry/node-ca-p8tll" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.114089 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.115538 4900 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.115579 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.115592 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.115610 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.115625 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:00Z","lastTransitionTime":"2025-12-02T13:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.127852 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d" exitCode=0 Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.127971 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.128045 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"04410f51f2d59ec9e3e78ea84ec3a051b020b87a665c3c265986f1f6689d272a"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.132175 4900 generic.go:334] "Generic (PLEG): container finished" podID="57723040-ba7b-43ac-99c5-234dac2c90ce" containerID="787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592" exitCode=0 Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.132235 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" event={"ID":"57723040-ba7b-43ac-99c5-234dac2c90ce","Type":"ContainerDied","Data":"787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.141111 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.167503 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.188862 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.197930 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e94f1e8-0edf-4550-bf19-da9690ade27d-serviceca\") pod \"node-ca-p8tll\" (UID: \"9e94f1e8-0edf-4550-bf19-da9690ade27d\") " pod="openshift-image-registry/node-ca-p8tll" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.198005 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khptv\" (UniqueName: \"kubernetes.io/projected/9e94f1e8-0edf-4550-bf19-da9690ade27d-kube-api-access-khptv\") pod \"node-ca-p8tll\" (UID: \"9e94f1e8-0edf-4550-bf19-da9690ade27d\") " pod="openshift-image-registry/node-ca-p8tll" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.198061 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e94f1e8-0edf-4550-bf19-da9690ade27d-host\") pod \"node-ca-p8tll\" (UID: \"9e94f1e8-0edf-4550-bf19-da9690ade27d\") " pod="openshift-image-registry/node-ca-p8tll" Dec 02 13:43:00 
crc kubenswrapper[4900]: I1202 13:43:00.198217 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e94f1e8-0edf-4550-bf19-da9690ade27d-host\") pod \"node-ca-p8tll\" (UID: \"9e94f1e8-0edf-4550-bf19-da9690ade27d\") " pod="openshift-image-registry/node-ca-p8tll" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.200224 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e94f1e8-0edf-4550-bf19-da9690ade27d-serviceca\") pod \"node-ca-p8tll\" (UID: \"9e94f1e8-0edf-4550-bf19-da9690ade27d\") " pod="openshift-image-registry/node-ca-p8tll" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.208127 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.218580 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.218609 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.218617 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.218633 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.218659 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:00Z","lastTransitionTime":"2025-12-02T13:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.226594 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khptv\" (UniqueName: \"kubernetes.io/projected/9e94f1e8-0edf-4550-bf19-da9690ade27d-kube-api-access-khptv\") pod \"node-ca-p8tll\" (UID: \"9e94f1e8-0edf-4550-bf19-da9690ade27d\") " pod="openshift-image-registry/node-ca-p8tll" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.228245 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont
/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.242841 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.249728 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-p8tll" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.257327 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.278509 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: W1202 13:43:00.284593 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e94f1e8_0edf_4550_bf19_da9690ade27d.slice/crio-0b8b1a50a1e024c06a6a24209155b33e97fd9b1f74eb2bab9af85ed7ef244e49 WatchSource:0}: Error finding container 0b8b1a50a1e024c06a6a24209155b33e97fd9b1f74eb2bab9af85ed7ef244e49: Status 404 returned error can't find the container with id 0b8b1a50a1e024c06a6a24209155b33e97fd9b1f74eb2bab9af85ed7ef244e49 Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.295657 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.305917 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.319103 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.323547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.323604 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.323618 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.323669 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.323686 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:00Z","lastTransitionTime":"2025-12-02T13:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.333573 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.350827 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.365298 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.399694 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z 
is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.409295 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.433230 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.433271 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.433280 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.433297 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.433309 4900 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:00Z","lastTransitionTime":"2025-12-02T13:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.459051 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.481424 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.505339 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.535794 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.535837 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.535845 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.535859 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.535871 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:00Z","lastTransitionTime":"2025-12-02T13:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.639165 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.639208 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.639218 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.639236 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.639247 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:00Z","lastTransitionTime":"2025-12-02T13:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.702295 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.702509 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.702569 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:43:08.702524055 +0000 UTC m=+34.118337936 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.702637 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.702738 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.702793 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.702740 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.702929 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 
13:43:00.702798 4900 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.702997 4900 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.702880 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.703162 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.703191 4900 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.702953 4900 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.703085 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:08.70306862 +0000 UTC m=+34.118882511 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.703299 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:08.703275276 +0000 UTC m=+34.119089127 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.703373 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-02 13:43:08.703308427 +0000 UTC m=+34.119122438 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.703421 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:08.70340471 +0000 UTC m=+34.119218601 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.742275 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.742339 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.742358 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.742384 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.742403 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:00Z","lastTransitionTime":"2025-12-02T13:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.845255 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.845300 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.845310 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.845329 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.845342 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:00Z","lastTransitionTime":"2025-12-02T13:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.909593 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.909680 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.909790 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.909946 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.910085 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:00 crc kubenswrapper[4900]: E1202 13:43:00.910307 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.948329 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.948377 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.948387 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.948406 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:00 crc kubenswrapper[4900]: I1202 13:43:00.948419 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:00Z","lastTransitionTime":"2025-12-02T13:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.051751 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.051836 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.051862 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.051893 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.051912 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:01Z","lastTransitionTime":"2025-12-02T13:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.143297 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.143358 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.143374 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.143386 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.143398 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.143409 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.146740 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p8tll" event={"ID":"9e94f1e8-0edf-4550-bf19-da9690ade27d","Type":"ContainerStarted","Data":"a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.146827 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p8tll" event={"ID":"9e94f1e8-0edf-4550-bf19-da9690ade27d","Type":"ContainerStarted","Data":"0b8b1a50a1e024c06a6a24209155b33e97fd9b1f74eb2bab9af85ed7ef244e49"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.153152 4900 generic.go:334] "Generic (PLEG): container finished" podID="57723040-ba7b-43ac-99c5-234dac2c90ce" containerID="64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3" exitCode=0 Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.153207 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" event={"ID":"57723040-ba7b-43ac-99c5-234dac2c90ce","Type":"ContainerDied","Data":"64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.154584 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.154615 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:01 crc 
kubenswrapper[4900]: I1202 13:43:01.154631 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.154682 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.154701 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:01Z","lastTransitionTime":"2025-12-02T13:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.177318 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.199495 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.223295 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.238836 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.254704 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.258052 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.258092 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.258108 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.258129 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.258145 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:01Z","lastTransitionTime":"2025-12-02T13:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.275907 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.292333 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.318798 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z 
is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.333993 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.350771 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.361714 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.361774 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.361786 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.361803 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.361831 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:01Z","lastTransitionTime":"2025-12-02T13:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
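
Every "Failed to update status for pod" entry above fails the same way: the kubelet's status PATCH is routed through the pod.network-node-identity.openshift.io mutating webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-02T13:43:01Z. A minimal Go sketch for confirming which certificate the endpoint is actually serving might look like the following (host and port are taken from the log; InsecureSkipVerify is deliberate, since normal verification would abort on the expired leaf before its validity window could be read):

```go
// certprobe.go - inspect the serving certificate of the webhook endpoint
// seen in the log (https://127.0.0.1:9743). Illustrative sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify is intentional: verification would fail on the
	// expired certificate before we could report its validity window.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
	if now.After(leaf.NotAfter) {
		// Mirrors the kubelet error: "current time ... is after ..."
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), leaf.NotAfter.Format(time.RFC3339))
	}
}
```

The network-node-identity-vrzqb status later in this log shows the webhook container mounting its certificate from a webhook-cert volume at /etc/webhook-cert/; the file name inside that mount is not visible here, so the equivalent on-disk check is not shown.
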
Has your network provider started?"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.366403 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.385120 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.406380 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.428529 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.449403 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.465171 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.466353 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.466393 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.466404 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.466421 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.466434 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:01Z","lastTransitionTime":"2025-12-02T13:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.483348 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.500316 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.515113 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.530181 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.545265 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
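
The patch bodies in these entries are hard to read because they are quoted twice: once when the status manager embeds the JSON patch in its error string, and again when klog quotes the whole err value, so every " in the JSON surfaces as \\\". Two rounds of unquoting recover plain JSON. A small sketch follows; the fragment is a shortened, hypothetical stand-in that reuses the node-ca pod UID from above, not a verbatim patch from the log:

```go
// patchdecode.go - undo the double quoting that makes the status patches
// in these log entries appear with \\\" escapes. Sketch for log reading.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

// unquote wraps a fragment in quotes and removes one level of escaping.
func unquote(s string) (string, error) {
	return strconv.Unquote(`"` + s + `"`)
}

func main() {
	// Shortened stand-in for a patch fragment copied out of the journal.
	frag := `{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"phase\\\":\\\"Running\\\"}}`

	once, err := unquote(frag) // journal escaping -> {\"metadata\":...}
	if err != nil {
		panic(err)
	}
	raw, err := unquote(once) // patch quoting -> {"metadata":...}
	if err != nil {
		panic(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}
```

The $setElementOrder/conditions key seen in the decoded patches is a strategic-merge-patch directive: it pins the ordering of the conditions list while the individual entries are merged by their type key.
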
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.559806 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.568923 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.568983 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.569004 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.569031 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.569051 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:01Z","lastTransitionTime":"2025-12-02T13:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.577343 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.595425 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.615864 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.635330 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.664005 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z 
is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.672389 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.672455 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.672473 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.672505 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.672524 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:01Z","lastTransitionTime":"2025-12-02T13:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.681407 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.776128 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.776207 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.776227 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.776263 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.776284 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:01Z","lastTransitionTime":"2025-12-02T13:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.883489 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.883560 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.883580 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.883612 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.883633 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:01Z","lastTransitionTime":"2025-12-02T13:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.989731 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.989803 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.989824 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.989854 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:01 crc kubenswrapper[4900]: I1202 13:43:01.989879 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:01Z","lastTransitionTime":"2025-12-02T13:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.093387 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.093480 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.093493 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.093511 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.093524 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:02Z","lastTransitionTime":"2025-12-02T13:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.162063 4900 generic.go:334] "Generic (PLEG): container finished" podID="57723040-ba7b-43ac-99c5-234dac2c90ce" containerID="a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8" exitCode=0 Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.163269 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" event={"ID":"57723040-ba7b-43ac-99c5-234dac2c90ce","Type":"ContainerDied","Data":"a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8"} Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.187678 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.199248 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.199297 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.199309 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.199329 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.199344 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:02Z","lastTransitionTime":"2025-12-02T13:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.220552 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.247192 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.263591 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.282512 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.301293 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.303640 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.303721 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.303768 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.303792 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.303810 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:02Z","lastTransitionTime":"2025-12-02T13:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.321115 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.337886 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.354366 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.381730 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.403079 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.407463 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.407506 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.407519 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.407540 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.407554 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:02Z","lastTransitionTime":"2025-12-02T13:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.422561 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.437915 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.454287 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.511205 4900 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.511267 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.511286 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.511712 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.511736 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:02Z","lastTransitionTime":"2025-12-02T13:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.614876 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.614936 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.614956 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.614992 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.615011 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:02Z","lastTransitionTime":"2025-12-02T13:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.720059 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.720129 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.720147 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.720173 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.720209 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:02Z","lastTransitionTime":"2025-12-02T13:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.823088 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.823133 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.823142 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.823157 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.823170 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:02Z","lastTransitionTime":"2025-12-02T13:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.910044 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.910043 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.910083 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:02 crc kubenswrapper[4900]: E1202 13:43:02.910268 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:02 crc kubenswrapper[4900]: E1202 13:43:02.910352 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:02 crc kubenswrapper[4900]: E1202 13:43:02.910439 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.925788 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.925848 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.925865 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.925887 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:02 crc kubenswrapper[4900]: I1202 13:43:02.925906 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:02Z","lastTransitionTime":"2025-12-02T13:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.028933 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.029007 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.029030 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.029065 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.029087 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.132902 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.132969 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.132986 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.133015 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.133036 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.173122 4900 generic.go:334] "Generic (PLEG): container finished" podID="57723040-ba7b-43ac-99c5-234dac2c90ce" containerID="dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf" exitCode=0 Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.173205 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" event={"ID":"57723040-ba7b-43ac-99c5-234dac2c90ce","Type":"ContainerDied","Data":"dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.198857 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.210728 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.234225 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.239943 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.240009 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.240030 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.240059 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.240080 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.263515 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.280982 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.297208 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.313104 4900 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b223
38d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.332092 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.342351 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.342410 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.342426 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.342445 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.342460 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.362178 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z 
is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.379563 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.400812 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.417048 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.443305 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.445341 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.445371 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.445380 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.445398 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.445436 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.461748 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.472471 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.548833 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.548898 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.548911 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.548934 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.548954 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.652169 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.652911 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.652992 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.653012 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.653121 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.654495 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.654567 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.654584 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.654838 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.654863 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: E1202 13:43:03.676719 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.682161 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.682229 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.682255 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.682287 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.682309 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: E1202 13:43:03.702419 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.707858 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.707911 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.707928 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.707948 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.707964 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: E1202 13:43:03.732596 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.737509 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.737559 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.737571 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.737590 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.737605 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: E1202 13:43:03.762003 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.768002 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.768060 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.768077 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.768099 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.768116 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: E1202 13:43:03.788824 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:03Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:03 crc kubenswrapper[4900]: E1202 13:43:03.788997 4900 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.791193 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.791234 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.791248 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.791270 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.791285 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.893500 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.893562 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.893573 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.893588 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.893598 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.997407 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.997478 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.997496 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.997525 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:03 crc kubenswrapper[4900]: I1202 13:43:03.997544 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:03Z","lastTransitionTime":"2025-12-02T13:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.100847 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.100914 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.100933 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.100958 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.100980 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:04Z","lastTransitionTime":"2025-12-02T13:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.203542 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.203627 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.203700 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.203735 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.203760 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:04Z","lastTransitionTime":"2025-12-02T13:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.210611 4900 generic.go:334] "Generic (PLEG): container finished" podID="57723040-ba7b-43ac-99c5-234dac2c90ce" containerID="25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a" exitCode=0 Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.210714 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" event={"ID":"57723040-ba7b-43ac-99c5-234dac2c90ce","Type":"ContainerDied","Data":"25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a"} Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.248454 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z 
is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.264144 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.283604 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.301072 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.307385 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.307449 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.307477 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.307511 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.307537 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:04Z","lastTransitionTime":"2025-12-02T13:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.321069 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.343073 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.366451 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c
8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" 
not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.381989 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.400332 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.410990 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.411072 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.411095 4900 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.411137 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.411158 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:04Z","lastTransitionTime":"2025-12-02T13:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.418163 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.431055 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.445562 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.463707 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-1
2-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.483894 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.514376 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.514434 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.514452 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.514478 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.514499 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:04Z","lastTransitionTime":"2025-12-02T13:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.619289 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.619355 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.619374 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.619432 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.619703 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:04Z","lastTransitionTime":"2025-12-02T13:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.725041 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.725107 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.725129 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.725155 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.725176 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:04Z","lastTransitionTime":"2025-12-02T13:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.829414 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.829487 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.829550 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.829587 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.829615 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:04Z","lastTransitionTime":"2025-12-02T13:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.910584 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.910729 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.910590 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:04 crc kubenswrapper[4900]: E1202 13:43:04.916590 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:04 crc kubenswrapper[4900]: E1202 13:43:04.916986 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:04 crc kubenswrapper[4900]: E1202 13:43:04.917205 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.933898 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.933979 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.934006 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.934036 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.934059 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:04Z","lastTransitionTime":"2025-12-02T13:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.945500 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.966344 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:04 crc kubenswrapper[4900]: I1202 13:43:04.991790 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.016562 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.039073 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.039120 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.039140 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.039169 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.039186 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:05Z","lastTransitionTime":"2025-12-02T13:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.043184 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.066332 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.088530 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.105001 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.163922 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.163961 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.163971 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.163990 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.164002 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:05Z","lastTransitionTime":"2025-12-02T13:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.165204 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.188392 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.207693 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.220408 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" event={"ID":"57723040-ba7b-43ac-99c5-234dac2c90ce","Type":"ContainerStarted","Data":"f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1"} Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.228496 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.257166 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z 
is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.266528 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.266611 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.266639 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.266709 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.266736 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:05Z","lastTransitionTime":"2025-12-02T13:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.272800 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.292125 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.309968 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.333553 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.351123 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.370460 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.371842 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.371960 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.372041 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.372228 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.372398 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:05Z","lastTransitionTime":"2025-12-02T13:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.411020 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 
crc kubenswrapper[4900]: I1202 13:43:05.467528 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\
\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.474958 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.475004 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.475015 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.475031 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.475042 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:05Z","lastTransitionTime":"2025-12-02T13:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.479158 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.493305 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.504818 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.519251 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.537235 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.551422 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.563000 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.577705 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.577753 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.577765 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.577781 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.577793 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:05Z","lastTransitionTime":"2025-12-02T13:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.680963 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.681031 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.681053 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.681111 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.681131 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:05Z","lastTransitionTime":"2025-12-02T13:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.703242 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.725036 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\
\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.747577 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.764628 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.780940 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.784253 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.784288 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.784299 4900 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.784316 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.784331 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:05Z","lastTransitionTime":"2025-12-02T13:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.801566 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.817488 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.832210 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.855201 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885ae
c5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.879104 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.886553 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.886596 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.886607 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.886627 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.886669 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:05Z","lastTransitionTime":"2025-12-02T13:43:05Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.899680 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.918777 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.940226 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.961552 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.989547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.989800 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.989890 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.989997 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.990084 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:05Z","lastTransitionTime":"2025-12-02T13:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:05 crc kubenswrapper[4900]: I1202 13:43:05.993161 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:05Z 
is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.094070 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.094147 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.094171 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.094207 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.094232 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:06Z","lastTransitionTime":"2025-12-02T13:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.197222 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.197758 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.197787 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.197821 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.197845 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:06Z","lastTransitionTime":"2025-12-02T13:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.231568 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"263ef745639d8e9974e64fa20bcddad52183f241e7c199ccce1aacb47eac714f"} Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.233480 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.254596 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.265798 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.273432 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.289773 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.301128 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.301183 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.301202 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.301226 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.301244 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:06Z","lastTransitionTime":"2025-12-02T13:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.306414 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.326194 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c5614956
7db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binar
y-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.346921 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.367492 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.396296 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263ef745639d8e9974e64fa20bcddad52183f241
e7c199ccce1aacb47eac714f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.404771 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.404832 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.404858 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.404889 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.404913 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:06Z","lastTransitionTime":"2025-12-02T13:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.410177 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.431465 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.450706 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.470908 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.490468 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.507508 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.507567 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.507595 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.507626 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.507682 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:06Z","lastTransitionTime":"2025-12-02T13:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.510540 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.536013 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.555699 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.577011 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.600835 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.610893 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.610950 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.610968 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.610994 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.611011 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:06Z","lastTransitionTime":"2025-12-02T13:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.620958 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.637449 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.657196 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.677125 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.694845 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.713506 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.714939 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.715013 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.715037 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.715075 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.715119 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:06Z","lastTransitionTime":"2025-12-02T13:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.734021 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.756998 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.798392 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263ef745639d8e9974e64fa20bcddad52183f241
e7c199ccce1aacb47eac714f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.819280 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.819372 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.819399 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.819434 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.819458 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:06Z","lastTransitionTime":"2025-12-02T13:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.819806 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:06Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.909742 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.909771 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.909744 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:06 crc kubenswrapper[4900]: E1202 13:43:06.909911 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:06 crc kubenswrapper[4900]: E1202 13:43:06.910276 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:06 crc kubenswrapper[4900]: E1202 13:43:06.910356 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.922979 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.923025 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.923040 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.923061 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:06 crc kubenswrapper[4900]: I1202 13:43:06.923074 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:06Z","lastTransitionTime":"2025-12-02T13:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.025421 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.025720 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.025798 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.025919 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.026012 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:07Z","lastTransitionTime":"2025-12-02T13:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.128490 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.128553 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.128572 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.128600 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.128618 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:07Z","lastTransitionTime":"2025-12-02T13:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.231754 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.231831 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.231853 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.231884 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.231908 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:07Z","lastTransitionTime":"2025-12-02T13:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.235069 4900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.236212 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.269848 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.288019 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.302276 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.315985 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.331214 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.335861 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.335918 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.335938 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.335968 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.335987 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:07Z","lastTransitionTime":"2025-12-02T13:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.350963 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.428903 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c5614956
7db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binar
y-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.439063 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.439102 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.439115 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.439135 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.439147 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:07Z","lastTransitionTime":"2025-12-02T13:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.444223 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.469742 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263ef745639d8e9974e64fa20bcddad52183f241e7c199ccce1aacb47eac714f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.480203 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.495809 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.507476 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.518301 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.532116 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.542444 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.542498 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.542509 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.542527 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.542540 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:07Z","lastTransitionTime":"2025-12-02T13:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.547128 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:07Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.645959 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.646034 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.646053 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.646079 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.646098 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:07Z","lastTransitionTime":"2025-12-02T13:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.749809 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.749875 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.749893 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.749919 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.749939 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:07Z","lastTransitionTime":"2025-12-02T13:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.853714 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.853776 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.853794 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.853819 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.853838 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:07Z","lastTransitionTime":"2025-12-02T13:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.957678 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.957749 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.957769 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.957796 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:07 crc kubenswrapper[4900]: I1202 13:43:07.957817 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:07Z","lastTransitionTime":"2025-12-02T13:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.061805 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.061876 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.061895 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.061921 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.061941 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:08Z","lastTransitionTime":"2025-12-02T13:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.165261 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.165318 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.165334 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.165359 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.165378 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:08Z","lastTransitionTime":"2025-12-02T13:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.242852 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/0.log"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.247406 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="263ef745639d8e9974e64fa20bcddad52183f241e7c199ccce1aacb47eac714f" exitCode=1
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.247534 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"263ef745639d8e9974e64fa20bcddad52183f241e7c199ccce1aacb47eac714f"}
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.248612 4900 scope.go:117] "RemoveContainer" containerID="263ef745639d8e9974e64fa20bcddad52183f241e7c199ccce1aacb47eac714f"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.269079 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.269145 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.269163 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.269189 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.269208 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:08Z","lastTransitionTime":"2025-12-02T13:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.270481 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.293338 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.320618 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPa
th\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://263ef745639d8e9974e64fa20bcddad52183f241e7c199ccce1aacb47eac714f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263ef745639d8e9974e64fa20bcddad52183f241e7c199ccce1aacb47eac714f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:08Z\\\",\\\"message\\\":\\\"kplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:07.623331 6236 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:07.623774 6236 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:43:07.623831 6236 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:43:07.623837 6236 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:43:07.623867 6236 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:07.623872 6236 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:07.623894 6236 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:43:07.623898 6236 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:43:07.623922 6236 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 13:43:07.623958 6236 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:43:07.623954 6236 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 
13:43:07.623976 6236 factory.go:656] Stopping watch factory\\\\nI1202 13:43:07.623992 6236 ovnkube.go:599] Stopped ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbf
b1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.334941 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.360497 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530
abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.372732 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.372826 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.372852 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.372883 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.372905 4900 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:08Z","lastTransitionTime":"2025-12-02T13:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.380823 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.406793 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.437327 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.460830 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.476343 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.476428 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.476447 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.476520 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.476542 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:08Z","lastTransitionTime":"2025-12-02T13:43:08Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.481546 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.502761 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.518945 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.537434 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.560755 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:08Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.580104 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.580165 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.580184 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.580223 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.580246 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:08Z","lastTransitionTime":"2025-12-02T13:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.683337 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.683412 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.683437 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.683480 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.683502 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:08Z","lastTransitionTime":"2025-12-02T13:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.740742 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.740868 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.740922 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.740956 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.740987 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741042 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:43:24.74099561 +0000 UTC m=+50.156809501 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741166 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741168 4900 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741199 4900 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741198 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741296 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:24.741264177 +0000 UTC m=+50.157078068 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741307 4900 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741331 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:24.741316519 +0000 UTC m=+50.157130400 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741166 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741387 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:24.74135978 +0000 UTC m=+50.157173801 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741425 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741452 4900 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.741536 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:24.741514555 +0000 UTC m=+50.157328436 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.786893 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.786969 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.786987 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.787014 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.787033 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:08Z","lastTransitionTime":"2025-12-02T13:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.890237 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.890308 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.890327 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.890355 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.890373 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:08Z","lastTransitionTime":"2025-12-02T13:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.909678 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.909696 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.910308 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.910373 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.910551 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:08 crc kubenswrapper[4900]: E1202 13:43:08.910727 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.992914 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.992962 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.992974 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.992993 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:08 crc kubenswrapper[4900]: I1202 13:43:08.993006 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:08Z","lastTransitionTime":"2025-12-02T13:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.096211 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.096254 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.096268 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.096286 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.096298 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:09Z","lastTransitionTime":"2025-12-02T13:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.198726 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.198775 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.198790 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.198808 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.198821 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:09Z","lastTransitionTime":"2025-12-02T13:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.256236 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/0.log" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.260462 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110"} Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.260676 4900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.284076 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T1
3:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2
de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.303020 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.303527 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.303553 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.303584 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.303608 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:09Z","lastTransitionTime":"2025-12-02T13:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.309095 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.333171 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.349155 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.365944 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.384494 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.402198 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.406830 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.406876 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.406893 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.406914 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.406933 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:09Z","lastTransitionTime":"2025-12-02T13:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.426485 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263ef745639d8e9974e64fa20bcddad52183f241e7c199ccce1aacb47eac714f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:08Z\\\",\\\"message\\\":\\\"kplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:07.623331 6236 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:07.623774 6236 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:43:07.623831 6236 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:43:07.623837 6236 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:43:07.623867 6236 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:07.623872 6236 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:07.623894 6236 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:43:07.623898 6236 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:43:07.623922 6236 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 13:43:07.623958 6236 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:43:07.623954 6236 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:07.623976 6236 factory.go:656] Stopping watch factory\\\\nI1202 13:43:07.623992 6236 ovnkube.go:599] Stopped 
ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.439072 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.456799 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.471601 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.488579 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.503311 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.509631 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.509689 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.509706 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.509728 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.509744 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:09Z","lastTransitionTime":"2025-12-02T13:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.517543 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:09Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.612197 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.612245 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.612262 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.612287 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.612306 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:09Z","lastTransitionTime":"2025-12-02T13:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.714757 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.714830 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.714849 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.714882 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.714903 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:09Z","lastTransitionTime":"2025-12-02T13:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.818235 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.818317 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.818341 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.818371 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.818389 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:09Z","lastTransitionTime":"2025-12-02T13:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.921876 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.921957 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.921984 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.922019 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:09 crc kubenswrapper[4900]: I1202 13:43:09.922044 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:09Z","lastTransitionTime":"2025-12-02T13:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.024615 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.024718 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.024736 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.024764 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.024782 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:10Z","lastTransitionTime":"2025-12-02T13:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.129168 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.129270 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.129318 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.129353 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.129376 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:10Z","lastTransitionTime":"2025-12-02T13:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.233811 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.233883 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.233903 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.233932 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.233952 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:10Z","lastTransitionTime":"2025-12-02T13:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.267763 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/1.log" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.269463 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/0.log" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.274084 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110" exitCode=1 Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.274143 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110"} Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.274217 4900 scope.go:117] "RemoveContainer" containerID="263ef745639d8e9974e64fa20bcddad52183f241e7c199ccce1aacb47eac714f" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.275448 4900 scope.go:117] "RemoveContainer" containerID="d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110" Dec 02 13:43:10 crc kubenswrapper[4900]: E1202 13:43:10.275793 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\"" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.296793 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.317308 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.337539 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.338372 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.338419 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.338435 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.338460 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.338477 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:10Z","lastTransitionTime":"2025-12-02T13:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.355872 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.373159 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.394834 4900 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b223
38d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.420050 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.441935 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.441989 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.442007 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.442031 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.442049 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:10Z","lastTransitionTime":"2025-12-02T13:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.453886 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef46312
9ddc2a2f30a054c48efa3110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263ef745639d8e9974e64fa20bcddad52183f241e7c199ccce1aacb47eac714f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:08Z\\\",\\\"message\\\":\\\"kplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:07.623331 6236 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:07.623774 6236 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:43:07.623831 6236 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:43:07.623837 6236 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:43:07.623867 6236 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:07.623872 6236 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:07.623894 6236 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:43:07.623898 6236 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:43:07.623922 6236 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 13:43:07.623958 6236 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:43:07.623954 6236 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:07.623976 6236 factory.go:656] Stopping watch factory\\\\nI1202 13:43:07.623992 6236 ovnkube.go:599] Stopped ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367057 6356 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367187 6356 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:09.367752 6356 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:09.368718 6356 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:09.368741 6356 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:09.368776 6356 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:09.368785 6356 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:09.368808 6356 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:43:09.368840 6356 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 13:43:09.368853 6356 handler.go:208] Removed 
*v1.Pod event handler 3\\\\nI1202 13:43:09.368862 6356 factory.go:656] Stopping watch factory\\\\nI1202 13:43:09.368882 6356 ovnkube.go:599] Stopped ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o:/
/c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.471222 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.496185 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.516471 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.527092 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck"] Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.528272 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.530897 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.531212 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.546370 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.546420 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.546443 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.546471 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.546492 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:10Z","lastTransitionTime":"2025-12-02T13:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.550172 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.562192 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da08de31-accc-4b2b-aac7-20e947009eb4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.562362 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da08de31-accc-4b2b-aac7-20e947009eb4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.562411 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da08de31-accc-4b2b-aac7-20e947009eb4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.562466 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dxxw\" (UniqueName: \"kubernetes.io/projected/da08de31-accc-4b2b-aac7-20e947009eb4-kube-api-access-6dxxw\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.574388 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.595399 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.614236 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.636003 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.650497 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.650551 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.650562 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.650578 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.650589 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:10Z","lastTransitionTime":"2025-12-02T13:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.657459 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.663711 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da08de31-accc-4b2b-aac7-20e947009eb4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.663781 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dxxw\" (UniqueName: \"kubernetes.io/projected/da08de31-accc-4b2b-aac7-20e947009eb4-kube-api-access-6dxxw\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.663843 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/da08de31-accc-4b2b-aac7-20e947009eb4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.663930 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da08de31-accc-4b2b-aac7-20e947009eb4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.665161 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/da08de31-accc-4b2b-aac7-20e947009eb4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.665743 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da08de31-accc-4b2b-aac7-20e947009eb4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.673318 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/da08de31-accc-4b2b-aac7-20e947009eb4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.679515 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.690135 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dxxw\" (UniqueName: \"kubernetes.io/projected/da08de31-accc-4b2b-aac7-20e947009eb4-kube-api-access-6dxxw\") pod \"ovnkube-control-plane-749d76644c-rsnck\" (UID: \"da08de31-accc-4b2b-aac7-20e947009eb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.699436 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"
podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.718586 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.745497 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoin
t\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e
28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-1
2-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.760905 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.761471 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.761504 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.761532 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.761552 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:10Z","lastTransitionTime":"2025-12-02T13:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.775508 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.795453 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.829363 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263ef745639d8e9974e64fa20bcddad52183f241e7c199ccce1aacb47eac714f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:08Z\\\",\\\"message\\\":\\\"kplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:07.623331 6236 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:07.623774 6236 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:43:07.623831 6236 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:43:07.623837 6236 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:43:07.623867 6236 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:07.623872 6236 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:07.623894 6236 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:43:07.623898 6236 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:43:07.623922 6236 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 13:43:07.623958 6236 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:43:07.623954 6236 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:07.623976 6236 factory.go:656] Stopping watch factory\\\\nI1202 13:43:07.623992 6236 ovnkube.go:599] Stopped 
ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367057 6356 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367187 6356 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:09.367752 6356 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:09.368718 6356 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:09.368741 6356 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:09.368776 6356 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:09.368785 6356 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:09.368808 6356 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:43:09.368840 6356 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 13:43:09.368853 6356 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:09.368862 6356 factory.go:656] Stopping watch factory\\\\nI1202 13:43:09.368882 6356 ovnkube.go:599] Stopped 
ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.846987 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.851080 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.865450 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.865490 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.865506 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.865533 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.865552 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:10Z","lastTransitionTime":"2025-12-02T13:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.871092 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: W1202 13:43:10.873475 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda08de31_accc_4b2b_aac7_20e947009eb4.slice/crio-70826aa3704ae6c058f39ee46419943a89d21c8752c1f8f8c024da83bb3a8806 WatchSource:0}: Error finding container 70826aa3704ae6c058f39ee46419943a89d21c8752c1f8f8c024da83bb3a8806: Status 404 returned error can't find the container with id 70826aa3704ae6c058f39ee46419943a89d21c8752c1f8f8c024da83bb3a8806 Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.893201 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.909263 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.909353 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.909284 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:10 crc kubenswrapper[4900]: E1202 13:43:10.909516 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:10 crc kubenswrapper[4900]: E1202 13:43:10.909415 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:10 crc kubenswrapper[4900]: E1202 13:43:10.909752 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.916177 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.936391 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:10Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.970056 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.970142 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.970165 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.970199 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:10 crc kubenswrapper[4900]: I1202 13:43:10.970222 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:10Z","lastTransitionTime":"2025-12-02T13:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.072920 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.073007 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.073033 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.073067 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.073096 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:11Z","lastTransitionTime":"2025-12-02T13:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.178109 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.178183 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.178201 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.178229 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.178249 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:11Z","lastTransitionTime":"2025-12-02T13:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.280451 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.280946 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.280970 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.281000 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.281023 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:11Z","lastTransitionTime":"2025-12-02T13:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.282159 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/1.log" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.287806 4900 scope.go:117] "RemoveContainer" containerID="d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110" Dec 02 13:43:11 crc kubenswrapper[4900]: E1202 13:43:11.288056 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\"" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.290433 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" event={"ID":"da08de31-accc-4b2b-aac7-20e947009eb4","Type":"ContainerStarted","Data":"5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.290487 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" event={"ID":"da08de31-accc-4b2b-aac7-20e947009eb4","Type":"ContainerStarted","Data":"70826aa3704ae6c058f39ee46419943a89d21c8752c1f8f8c024da83bb3a8806"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.309414 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.325901 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.346269 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.358296 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.371521 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.384243 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.384289 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.384307 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.384332 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.384350 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:11Z","lastTransitionTime":"2025-12-02T13:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.393278 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.413554 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.446725 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPa
th\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367057 6356 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367187 6356 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:09.367752 6356 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:09.368718 6356 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:09.368741 6356 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:09.368776 6356 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:09.368785 6356 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:09.368808 6356 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:43:09.368840 6356 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 13:43:09.368853 6356 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:09.368862 6356 factory.go:656] Stopping watch factory\\\\nI1202 13:43:09.368882 6356 ovnkube.go:599] Stopped 
ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.463355 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.487438 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.487481 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.487494 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.487511 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.487524 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:11Z","lastTransitionTime":"2025-12-02T13:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.507590 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.542791 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.586262 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.593350 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.593377 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.593395 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.593410 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.593420 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:11Z","lastTransitionTime":"2025-12-02T13:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.603448 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.617554 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.631308 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.685783 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kzhwn"] Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.686251 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:11 crc kubenswrapper[4900]: E1202 13:43:11.686309 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.704707 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.704767 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.704779 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.704801 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.704818 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:11Z","lastTransitionTime":"2025-12-02T13:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.708579 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.724411 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.739036 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.760082 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.779169 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " 
pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.779236 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jm8c\" (UniqueName: \"kubernetes.io/projected/1c63b5f6-db87-48a2-b87e-5442db707843-kube-api-access-6jm8c\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.783933 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.806452 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.808837 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.808900 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.808925 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.808957 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.808984 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:11Z","lastTransitionTime":"2025-12-02T13:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.824271 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.843425 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.868585 4900 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b223
38d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.880359 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.880434 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jm8c\" (UniqueName: \"kubernetes.io/projected/1c63b5f6-db87-48a2-b87e-5442db707843-kube-api-access-6jm8c\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:11 crc kubenswrapper[4900]: E1202 13:43:11.880598 4900 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:11 crc kubenswrapper[4900]: E1202 13:43:11.880772 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs podName:1c63b5f6-db87-48a2-b87e-5442db707843 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:12.380734002 +0000 UTC m=+37.796547883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs") pod "network-metrics-daemon-kzhwn" (UID: "1c63b5f6-db87-48a2-b87e-5442db707843") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.892598 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.913868 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jm8c\" (UniqueName: \"kubernetes.io/projected/1c63b5f6-db87-48a2-b87e-5442db707843-kube-api-access-6jm8c\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.914562 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.914606 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.914624 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 
13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.914676 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.914696 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:11Z","lastTransitionTime":"2025-12-02T13:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.929866 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef46312
9ddc2a2f30a054c48efa3110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367057 6356 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367187 6356 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:09.367752 6356 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:09.368718 6356 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:09.368741 6356 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:09.368776 6356 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:09.368785 6356 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:09.368808 6356 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:43:09.368840 6356 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 13:43:09.368853 6356 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:09.368862 6356 factory.go:656] Stopping watch factory\\\\nI1202 13:43:09.368882 6356 ovnkube.go:599] Stopped ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.949807 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.973877 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:11 crc kubenswrapper[4900]: I1202 13:43:11.995883 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:11Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.018290 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.018446 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.018499 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.018517 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.018548 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.018571 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:12Z","lastTransitionTime":"2025-12-02T13:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.037419 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.122200 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.122266 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.122284 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.122337 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.122358 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:12Z","lastTransitionTime":"2025-12-02T13:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.225228 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.225285 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.225311 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.225346 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.225368 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:12Z","lastTransitionTime":"2025-12-02T13:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.297033 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" event={"ID":"da08de31-accc-4b2b-aac7-20e947009eb4","Type":"ContainerStarted","Data":"778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6"} Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.321940 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-d
ir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.328507 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.328630 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.328692 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.328731 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.328756 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:12Z","lastTransitionTime":"2025-12-02T13:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.341799 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.364785 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.386366 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.386715 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:12 crc kubenswrapper[4900]: E1202 13:43:12.387259 4900 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:12 crc kubenswrapper[4900]: E1202 13:43:12.387576 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs podName:1c63b5f6-db87-48a2-b87e-5442db707843 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:13.387531853 +0000 UTC m=+38.803345904 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs") pod "network-metrics-daemon-kzhwn" (UID: "1c63b5f6-db87-48a2-b87e-5442db707843") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.405093 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.427197 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.431971 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.432059 
4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.432085 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.432119 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.432146 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:12Z","lastTransitionTime":"2025-12-02T13:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.447966 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.465421 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.484889 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.508269 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885ae
c5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.525344 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.538566 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.538774 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.538822 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.538865 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.538893 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:12Z","lastTransitionTime":"2025-12-02T13:43:12Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.551025 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.564533 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.582522 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.603926 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.634196 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367057 6356 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367187 6356 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:09.367752 6356 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:09.368718 6356 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:09.368741 6356 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:09.368776 6356 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:09.368785 6356 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:09.368808 6356 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:43:09.368840 6356 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 13:43:09.368853 6356 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:09.368862 6356 factory.go:656] Stopping watch factory\\\\nI1202 13:43:09.368882 6356 ovnkube.go:599] Stopped ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
10s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae36
8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:12Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.642066 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.642131 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.642141 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.642157 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.642168 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:12Z","lastTransitionTime":"2025-12-02T13:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.745687 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.745757 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.745775 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.745801 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.745824 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:12Z","lastTransitionTime":"2025-12-02T13:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.848947 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.848993 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.849002 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.849021 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.849034 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:12Z","lastTransitionTime":"2025-12-02T13:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.909550 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.909610 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.909699 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:12 crc kubenswrapper[4900]: E1202 13:43:12.909743 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:12 crc kubenswrapper[4900]: E1202 13:43:12.909855 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:12 crc kubenswrapper[4900]: E1202 13:43:12.910039 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.952192 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.952249 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.952267 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.952293 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.952311 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:12Z","lastTransitionTime":"2025-12-02T13:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.966624 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:43:12 crc kubenswrapper[4900]: I1202 13:43:12.967454 4900 scope.go:117] "RemoveContainer" containerID="d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110" Dec 02 13:43:12 crc kubenswrapper[4900]: E1202 13:43:12.967616 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\"" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.055780 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.055835 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.055848 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.055868 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.055880 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.159446 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.159503 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.159520 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.159545 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.159565 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.263020 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.263082 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.263099 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.263123 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.263140 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.366535 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.366611 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.366632 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.366691 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.366718 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.399803 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:13 crc kubenswrapper[4900]: E1202 13:43:13.400046 4900 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:13 crc kubenswrapper[4900]: E1202 13:43:13.400842 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs podName:1c63b5f6-db87-48a2-b87e-5442db707843 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:15.400792074 +0000 UTC m=+40.816605955 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs") pod "network-metrics-daemon-kzhwn" (UID: "1c63b5f6-db87-48a2-b87e-5442db707843") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.470235 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.470308 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.470328 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.470355 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.470375 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.573733 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.573823 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.573842 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.573875 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.573895 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.677015 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.677080 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.677093 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.677115 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.677449 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.781121 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.781167 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.781181 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.781199 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.781209 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.859964 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.860048 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.860075 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.860113 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.860138 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: E1202 13:43:13.882004 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:13Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.887471 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.887528 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.887547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.887569 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.887589 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.909020 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:13 crc kubenswrapper[4900]: E1202 13:43:13.908960 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:13Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:13 crc kubenswrapper[4900]: E1202 13:43:13.909245 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.915133 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.915196 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.915214 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.915237 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.915254 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: E1202 13:43:13.936122 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:13Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.941376 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.941470 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.941495 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.942072 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.942100 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:13 crc kubenswrapper[4900]: E1202 13:43:13.973723 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:13Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.978973 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.979019 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.979038 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.979066 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:13 crc kubenswrapper[4900]: I1202 13:43:13.979086 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:13Z","lastTransitionTime":"2025-12-02T13:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:14 crc kubenswrapper[4900]: E1202 13:43:13.999784 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:13Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:14 crc kubenswrapper[4900]: E1202 13:43:14.000004 4900 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.002075 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.002126 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.002145 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.002169 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.002189 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:14Z","lastTransitionTime":"2025-12-02T13:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.105164 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.105232 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.105252 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.105283 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.105302 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:14Z","lastTransitionTime":"2025-12-02T13:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.209462 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.209514 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.209567 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.209595 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.209614 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:14Z","lastTransitionTime":"2025-12-02T13:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.312558 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.312617 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.312635 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.312689 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.312741 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:14Z","lastTransitionTime":"2025-12-02T13:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.416032 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.416097 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.416113 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.416138 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.416157 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:14Z","lastTransitionTime":"2025-12-02T13:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.519333 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.519392 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.519409 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.519434 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.519451 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:14Z","lastTransitionTime":"2025-12-02T13:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.623290 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.623362 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.623381 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.623408 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.623435 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:14Z","lastTransitionTime":"2025-12-02T13:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.726396 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.726470 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.726488 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.726516 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.726563 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:14Z","lastTransitionTime":"2025-12-02T13:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.830499 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.830570 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.830590 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.830619 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.830640 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:14Z","lastTransitionTime":"2025-12-02T13:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.909828 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.909927 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.909828 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:14 crc kubenswrapper[4900]: E1202 13:43:14.910322 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:14 crc kubenswrapper[4900]: E1202 13:43:14.910401 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:14 crc kubenswrapper[4900]: E1202 13:43:14.910594 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.932142 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.934619 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.934714 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.934735 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.934758 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.934782 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:14Z","lastTransitionTime":"2025-12-02T13:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.954201 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.971009 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:14 crc kubenswrapper[4900]: I1202 13:43:14.988423 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.014230 4900 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.035563 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.037096 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.037143 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.037161 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.037188 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.037207 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:15Z","lastTransitionTime":"2025-12-02T13:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.056027 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.086695 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367057 6356 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367187 6356 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:09.367752 6356 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:09.368718 6356 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:09.368741 6356 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:09.368776 6356 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:09.368785 6356 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:09.368808 6356 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:43:09.368840 6356 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 13:43:09.368853 6356 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:09.368862 6356 factory.go:656] Stopping watch factory\\\\nI1202 13:43:09.368882 6356 ovnkube.go:599] Stopped ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.105322 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.129702 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.140961 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.141056 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.141084 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.141118 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.141172 4900 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:15Z","lastTransitionTime":"2025-12-02T13:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.157730 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.179005 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.198261 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.218125 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.237722 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.246541 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.246606 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.246623 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.246670 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.246693 4900 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:15Z","lastTransitionTime":"2025-12-02T13:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.254794 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-02T13:43:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.350086 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.350145 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.350163 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.350189 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.350210 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:15Z","lastTransitionTime":"2025-12-02T13:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.423491 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:15 crc kubenswrapper[4900]: E1202 13:43:15.423671 4900 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:15 crc kubenswrapper[4900]: E1202 13:43:15.423733 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs podName:1c63b5f6-db87-48a2-b87e-5442db707843 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:19.423717016 +0000 UTC m=+44.839530867 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs") pod "network-metrics-daemon-kzhwn" (UID: "1c63b5f6-db87-48a2-b87e-5442db707843") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.453886 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.453942 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.453959 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.453983 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.454001 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:15Z","lastTransitionTime":"2025-12-02T13:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.557242 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.557775 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.557945 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.558108 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.558241 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:15Z","lastTransitionTime":"2025-12-02T13:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.661065 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.661315 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.661469 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.661614 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.661807 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:15Z","lastTransitionTime":"2025-12-02T13:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.765387 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.765754 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.765914 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.766066 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.766212 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:15Z","lastTransitionTime":"2025-12-02T13:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.869332 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.869386 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.869404 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.869426 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.869442 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:15Z","lastTransitionTime":"2025-12-02T13:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.909358 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:15 crc kubenswrapper[4900]: E1202 13:43:15.909557 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.972259 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.972338 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.972362 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.972392 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:15 crc kubenswrapper[4900]: I1202 13:43:15.972421 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:15Z","lastTransitionTime":"2025-12-02T13:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.076701 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.076774 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.076792 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.076831 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.076854 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:16Z","lastTransitionTime":"2025-12-02T13:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.179410 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.179479 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.179502 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.179534 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.179563 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:16Z","lastTransitionTime":"2025-12-02T13:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.283601 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.283722 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.283754 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.283780 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.283835 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:16Z","lastTransitionTime":"2025-12-02T13:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.387265 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.387340 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.387357 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.387381 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.387400 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:16Z","lastTransitionTime":"2025-12-02T13:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.490876 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.490933 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.490970 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.491015 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.491033 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:16Z","lastTransitionTime":"2025-12-02T13:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.597846 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.597924 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.597963 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.597997 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.598022 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:16Z","lastTransitionTime":"2025-12-02T13:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.701471 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.701535 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.701558 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.701586 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.701608 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:16Z","lastTransitionTime":"2025-12-02T13:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.804859 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.804920 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.804937 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.804960 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.804977 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:16Z","lastTransitionTime":"2025-12-02T13:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.908481 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.908532 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.908549 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.908574 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.908591 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:16Z","lastTransitionTime":"2025-12-02T13:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.909086 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.909149 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:16 crc kubenswrapper[4900]: I1202 13:43:16.909094 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:16 crc kubenswrapper[4900]: E1202 13:43:16.909261 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:16 crc kubenswrapper[4900]: E1202 13:43:16.909407 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:16 crc kubenswrapper[4900]: E1202 13:43:16.909727 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.011256 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.011313 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.011330 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.011354 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.011374 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:17Z","lastTransitionTime":"2025-12-02T13:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.114100 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.114162 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.114180 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.114218 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.114238 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:17Z","lastTransitionTime":"2025-12-02T13:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.216749 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.216915 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.216940 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.216973 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.216999 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:17Z","lastTransitionTime":"2025-12-02T13:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.320200 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.320297 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.320342 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.320375 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.320400 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:17Z","lastTransitionTime":"2025-12-02T13:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.424313 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.424366 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.424385 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.424411 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.424434 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:17Z","lastTransitionTime":"2025-12-02T13:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.528096 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.528157 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.528179 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.528209 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.528230 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:17Z","lastTransitionTime":"2025-12-02T13:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.631766 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.631841 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.631862 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.631973 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.632001 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:17Z","lastTransitionTime":"2025-12-02T13:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.735635 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.735736 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.735754 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.735779 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.735800 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:17Z","lastTransitionTime":"2025-12-02T13:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.839489 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.839545 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.839610 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.839699 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.839727 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:17Z","lastTransitionTime":"2025-12-02T13:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.909087 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:43:17 crc kubenswrapper[4900]: E1202 13:43:17.909680 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.943296 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.943560 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.943766 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.944183 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:17 crc kubenswrapper[4900]: I1202 13:43:17.944419 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:17Z","lastTransitionTime":"2025-12-02T13:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
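Interleaved with the status cycle, a pod worker hits the same root cause: util.go:30 notes there is no sandbox yet for network-metrics-daemon-kzhwn, and pod_workers.go:1301 skips the sync because the runtime network is not ready. Kubelet only gates pods that need the cluster network; host-network pods (the static control-plane pods, for instance) keep running. A rough paraphrase of that gate, not kubelet's actual code:

    package main

    import (
        "errors"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // errNetworkNotReady mirrors the "network is not ready" sync error above.
    var errNetworkNotReady = errors.New("network is not ready: container runtime network not ready")

    // canStartSandbox: host-network pods need no CNI plugin; everything else waits.
    func canStartSandbox(networkReady bool, pod *corev1.Pod) error {
        if pod.Spec.HostNetwork {
            return nil
        }
        if !networkReady {
            return errNetworkNotReady
        }
        return nil
    }

    func main() {
        p := &corev1.Pod{} // a pod on the cluster network, like the metrics daemon
        fmt.Println(canStartSandbox(false, p)) // prints the "network is not ready" error
    }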
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.048771 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.049398 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.049564 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.049744 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.049919 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:18Z","lastTransitionTime":"2025-12-02T13:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.153490 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.153562 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.153580 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.153618 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.153636 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:18Z","lastTransitionTime":"2025-12-02T13:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.256728 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.256807 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.256832 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.256867 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.256893 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:18Z","lastTransitionTime":"2025-12-02T13:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.366700 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.366791 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.366818 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.366852 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.366886 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:18Z","lastTransitionTime":"2025-12-02T13:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.469748 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.469807 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.469825 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.469853 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.469874 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:18Z","lastTransitionTime":"2025-12-02T13:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.573751 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.573819 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.573838 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.573861 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.573880 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:18Z","lastTransitionTime":"2025-12-02T13:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.677077 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.677152 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.677176 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.677206 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.677230 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:18Z","lastTransitionTime":"2025-12-02T13:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.780152 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.780240 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.780265 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.780299 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.780324 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:18Z","lastTransitionTime":"2025-12-02T13:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.884621 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.884712 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.884730 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.884755 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.884773 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:18Z","lastTransitionTime":"2025-12-02T13:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.911991 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:43:18 crc kubenswrapper[4900]: E1202 13:43:18.912233 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.912500 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:43:18 crc kubenswrapper[4900]: E1202 13:43:18.912603 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.913706 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:43:18 crc kubenswrapper[4900]: E1202 13:43:18.913807 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.988437 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.988496 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.988509 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.988530 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:18 crc kubenswrapper[4900]: I1202 13:43:18.988544 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:18Z","lastTransitionTime":"2025-12-02T13:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
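Three more pods (networking-console-plugin, network-check-source, network-check-target) now fail their first sync for the same reason. With this many near-identical records, a per-pod tally is more useful than scrolling. A small filter over the journal text, assuming the log is fed on stdin and that the pod="..." field looks exactly as it does in these records:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches the pod="namespace/name" field on "Error syncing pod" records.
        re := regexp.MustCompile(`Error syncing pod.*pod="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal records can be very long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for pod, n := range counts {
            fmt.Printf("%6d %s\n", n, pod)
        }
    }

Piped this capture so far, it would report one failed sync each for the four affected pods; on a live node the input would come from journalctl for the kubelet unit.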
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.091825 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.091937 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.091988 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.092013 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.092031 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:19Z","lastTransitionTime":"2025-12-02T13:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.195022 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.195079 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.195095 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.195116 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.195132 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:19Z","lastTransitionTime":"2025-12-02T13:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.299027 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.299082 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.299094 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.299117 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.299132 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:19Z","lastTransitionTime":"2025-12-02T13:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.402807 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.402877 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.402897 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.402921 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.402943 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:19Z","lastTransitionTime":"2025-12-02T13:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.474067 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:43:19 crc kubenswrapper[4900]: E1202 13:43:19.474358 4900 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 02 13:43:19 crc kubenswrapper[4900]: E1202 13:43:19.474469 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs podName:1c63b5f6-db87-48a2-b87e-5442db707843 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:27.474438209 +0000 UTC m=+52.890252100 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs") pod "network-metrics-daemon-kzhwn" (UID: "1c63b5f6-db87-48a2-b87e-5442db707843") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.506203 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.506265 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.506286 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.506331 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.506364 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:19Z","lastTransitionTime":"2025-12-02T13:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.609829 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.609911 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.609940 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.609974 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.609997 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:19Z","lastTransitionTime":"2025-12-02T13:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
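Alongside the CNI problem, the metrics-certs mount fails because the metrics-daemon-secret object is not yet registered in the kubelet's object cache, and nestedpendingoperations.go backs the retry off to 8s. The backoff doubles on each consecutive failure of the same operation; assuming a 500ms seed and a cap of roughly two minutes (defaults assumed here, not visible in this log), the fifth failure lands on exactly the 8s shown:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond           // assumed initial backoff
        maxDelay := 2*time.Minute + 2*time.Second // assumed cap
        for failure := 1; failure <= 6; failure++ {
            fmt.Printf("failure %d -> durationBeforeRetry %v\n", failure, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Once the secret is published, the retry scheduled for 13:43:27 should succeed and the volume mount normally.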
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.712543 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.712594 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.712607 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.712626 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.712639 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:19Z","lastTransitionTime":"2025-12-02T13:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.815318 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.815383 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.815396 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.815417 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.815432 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:19Z","lastTransitionTime":"2025-12-02T13:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.909780 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:43:19 crc kubenswrapper[4900]: E1202 13:43:19.909956 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.918118 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.918184 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.918208 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.918245 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:19 crc kubenswrapper[4900]: I1202 13:43:19.918275 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:19Z","lastTransitionTime":"2025-12-02T13:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.020522 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.020579 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.020589 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.020606 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.020616 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:20Z","lastTransitionTime":"2025-12-02T13:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.124356 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.124432 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.124451 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.124530 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.124551 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:20Z","lastTransitionTime":"2025-12-02T13:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.227437 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.227486 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.227498 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.227516 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.227529 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:20Z","lastTransitionTime":"2025-12-02T13:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.339468 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.339538 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.339559 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.339619 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.339674 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:20Z","lastTransitionTime":"2025-12-02T13:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.445490 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.445576 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.445601 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.445634 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.445693 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:20Z","lastTransitionTime":"2025-12-02T13:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.549223 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.549294 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.549313 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.549338 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.549355 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:20Z","lastTransitionTime":"2025-12-02T13:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.652200 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.652253 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.652272 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.652295 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.652312 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:20Z","lastTransitionTime":"2025-12-02T13:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.756552 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.756635 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.756688 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.756723 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.756747 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:20Z","lastTransitionTime":"2025-12-02T13:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.860763 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.860831 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.860848 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.860877 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.860897 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:20Z","lastTransitionTime":"2025-12-02T13:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.909672 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.909709 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:43:20 crc kubenswrapper[4900]: E1202 13:43:20.909900 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.909714 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:43:20 crc kubenswrapper[4900]: E1202 13:43:20.910149 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:43:20 crc kubenswrapper[4900]: E1202 13:43:20.910240 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.964095 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.964166 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.964184 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.964212 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:20 crc kubenswrapper[4900]: I1202 13:43:20.964232 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:20Z","lastTransitionTime":"2025-12-02T13:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.066948 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.067052 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.067078 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.067115 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.067144 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:21Z","lastTransitionTime":"2025-12-02T13:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
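Every failure above traces back to the same empty directory: the runtime reports NetworkReady=false because it finds no network configuration under /etc/kubernetes/cni/net.d/, a directory that is normally populated by the cluster's network plugin once its own pods come up. (Note the slightly out-of-order timestamps around 13:43:20.9097xx; the pod workers log concurrently, so records can be flushed out of sequence.) A rough reproduction of the directory check, assuming the usual .conf/.conflist/.json extensions:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        var found []string
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, _ := filepath.Glob(filepath.Join(dir, pattern))
            found = append(found, matches...)
        }
        if len(found) == 0 {
            // Same wording as the runtime's complaint in the records above.
            fmt.Fprintf(os.Stderr, "no CNI configuration file in %s/\n", dir)
            os.Exit(1)
        }
        for _, f := range found {
            fmt.Println(f)
        }
    }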
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.170475 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.170555 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.170579 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.170605 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.170622 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:21Z","lastTransitionTime":"2025-12-02T13:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.274682 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.274754 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.274776 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.274810 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.274837 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:21Z","lastTransitionTime":"2025-12-02T13:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.378547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.378625 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.378676 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.378708 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.378732 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:21Z","lastTransitionTime":"2025-12-02T13:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.483190 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.483924 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.483945 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.483972 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.483992 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:21Z","lastTransitionTime":"2025-12-02T13:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.587542 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.587623 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.587687 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.587716 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.587742 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:21Z","lastTransitionTime":"2025-12-02T13:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.691437 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.691526 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.691558 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.691590 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.691616 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:21Z","lastTransitionTime":"2025-12-02T13:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.794682 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.794746 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.794763 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.794800 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.794837 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:21Z","lastTransitionTime":"2025-12-02T13:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.898596 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.898731 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.898763 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.898794 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.898815 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:21Z","lastTransitionTime":"2025-12-02T13:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:21 crc kubenswrapper[4900]: I1202 13:43:21.909572 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:43:21 crc kubenswrapper[4900]: E1202 13:43:21.909859 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843"
Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.003007 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.003067 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.003083 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.003111 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.003128 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:22Z","lastTransitionTime":"2025-12-02T13:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.116583 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.116716 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.116749 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.118059 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.118090 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:22Z","lastTransitionTime":"2025-12-02T13:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.221323 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.221405 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.221425 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.221453 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.221473 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:22Z","lastTransitionTime":"2025-12-02T13:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.325098 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.325166 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.325184 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.325214 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.325235 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:22Z","lastTransitionTime":"2025-12-02T13:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.429145 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.429246 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.429272 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.429304 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.429332 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:22Z","lastTransitionTime":"2025-12-02T13:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.532470 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.532524 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.532541 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.532566 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.532581 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:22Z","lastTransitionTime":"2025-12-02T13:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.636520 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.636613 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.636633 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.636692 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.636719 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:22Z","lastTransitionTime":"2025-12-02T13:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.739413 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.739473 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.739484 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.739499 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.739510 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:22Z","lastTransitionTime":"2025-12-02T13:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.843574 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.843609 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.843621 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.843637 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.843666 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:22Z","lastTransitionTime":"2025-12-02T13:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.909636 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.909802 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:22 crc kubenswrapper[4900]: E1202 13:43:22.909910 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:22 crc kubenswrapper[4900]: E1202 13:43:22.910078 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.910249 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:22 crc kubenswrapper[4900]: E1202 13:43:22.910547 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.947181 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.947244 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.947267 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.947297 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:22 crc kubenswrapper[4900]: I1202 13:43:22.947321 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:22Z","lastTransitionTime":"2025-12-02T13:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.051313 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.051384 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.051403 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.051434 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.051454 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:23Z","lastTransitionTime":"2025-12-02T13:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.154881 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.154954 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.154976 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.155004 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.155024 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:23Z","lastTransitionTime":"2025-12-02T13:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.258338 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.258419 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.258436 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.258464 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.258484 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:23Z","lastTransitionTime":"2025-12-02T13:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.361840 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.361910 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.361929 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.361953 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.361972 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:23Z","lastTransitionTime":"2025-12-02T13:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.465715 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.465778 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.465797 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.465825 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.465848 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:23Z","lastTransitionTime":"2025-12-02T13:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.569765 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.569858 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.569886 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.569926 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.569950 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:23Z","lastTransitionTime":"2025-12-02T13:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.701672 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.701753 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.701773 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.701805 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.701826 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:23Z","lastTransitionTime":"2025-12-02T13:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.806162 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.806228 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.806245 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.806276 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.806294 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:23Z","lastTransitionTime":"2025-12-02T13:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.909755 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:23 crc kubenswrapper[4900]: E1202 13:43:23.910151 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.910377 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.910440 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.910462 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.910487 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:23 crc kubenswrapper[4900]: I1202 13:43:23.910504 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:23Z","lastTransitionTime":"2025-12-02T13:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.014026 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.014090 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.014108 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.014133 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.014153 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.118812 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.118922 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.118947 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.118983 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.119020 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.223465 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.223539 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.223560 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.223587 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.223607 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.279871 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.279941 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.279960 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.279989 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.280012 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.302340 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.308367 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.308432 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.308452 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.308478 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.308499 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.329801 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.336285 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.336381 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.336410 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.336445 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.336469 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.360043 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.367348 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.367410 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.367432 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.367461 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.367482 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.389529 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.394155 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.394241 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.394275 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.394311 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.394338 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.416429 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.416702 4900 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.419103 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.419157 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.419181 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.419211 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.419233 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.523069 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.523135 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.523153 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.523181 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.523201 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.626414 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.626470 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.626525 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.626552 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.626570 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.730162 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.730236 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.730258 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.730289 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.730312 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.747784 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.747981 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.748061 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.748099 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.748153 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748299 4900 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748314 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748365 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748377 4900 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748387 4900 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748410 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:56.748375893 +0000 UTC m=+82.164189784 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748447 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:43:56.748427154 +0000 UTC m=+82.164241045 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748476 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:56.748464075 +0000 UTC m=+82.164277966 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748496 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:56.748486446 +0000 UTC m=+82.164300337 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748531 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748566 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748591 4900 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.748713 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:56.748684461 +0000 UTC m=+82.164498342 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.833801 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.833910 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.833932 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.833964 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.833986 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.909416 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.909527 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.909668 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.910752 4900 scope.go:117] "RemoveContainer" containerID="d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110" Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.911062 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.911241 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:24 crc kubenswrapper[4900]: E1202 13:43:24.911385 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.937366 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac
0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.938069 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.938147 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.938169 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.938199 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.938221 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:24Z","lastTransitionTime":"2025-12-02T13:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.960686 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:24 crc kubenswrapper[4900]: I1202 13:43:24.993547 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367057 6356 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367187 6356 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:09.367752 6356 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:09.368718 6356 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:09.368741 6356 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:09.368776 6356 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:09.368785 6356 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:09.368808 6356 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:43:09.368840 6356 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 13:43:09.368853 6356 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:09.368862 6356 factory.go:656] Stopping watch factory\\\\nI1202 13:43:09.368882 6356 ovnkube.go:599] Stopped ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.021003 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.042355 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.042433 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.042446 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.042467 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.042481 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:25Z","lastTransitionTime":"2025-12-02T13:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.043865 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.062620 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.083962 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.097946 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.114272 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.127869 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.141824 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.146147 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.146201 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.146221 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.146245 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.146264 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:25Z","lastTransitionTime":"2025-12-02T13:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.159034 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.174995 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.190929 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.206464 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.223044 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.249788 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.249831 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.249845 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.249867 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.249882 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:25Z","lastTransitionTime":"2025-12-02T13:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.351570 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.351611 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.351622 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.351660 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.351674 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:25Z","lastTransitionTime":"2025-12-02T13:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.352512 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/1.log" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.356414 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0"} Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.356953 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.375352 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4c
e5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.395158 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.414863 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.430873 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.438498 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.450032 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.452861 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.454724 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.454829 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.454856 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.454890 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.454923 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:25Z","lastTransitionTime":"2025-12-02T13:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.472748 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.494893 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.518073 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.541858 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.556787 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.557846 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.557903 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.557916 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.557938 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.557954 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:25Z","lastTransitionTime":"2025-12-02T13:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.585880 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.603195 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c5614956
7db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binar
y-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.619131 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.633321 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.659613 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09f
f8fd4967ea0457c948830ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367057 6356 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367187 6356 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:09.367752 6356 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:09.368718 6356 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:09.368741 6356 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:09.368776 6356 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:09.368785 6356 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:09.368808 6356 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:43:09.368840 6356 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 13:43:09.368853 6356 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:09.368862 6356 factory.go:656] Stopping watch factory\\\\nI1202 13:43:09.368882 6356 ovnkube.go:599] Stopped 
ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.661244 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.661285 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.661301 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.661320 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.661335 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:25Z","lastTransitionTime":"2025-12-02T13:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.675812 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.696242 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.713340 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.732636 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.747669 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.759381 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.763982 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.764039 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.764054 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.764081 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.764094 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:25Z","lastTransitionTime":"2025-12-02T13:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.776687 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.796489 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.814904 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.837861 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367057 6356 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367187 6356 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:09.367752 6356 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:09.368718 6356 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:09.368741 6356 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:09.368776 6356 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:09.368785 6356 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:09.368808 6356 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:43:09.368840 6356 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 13:43:09.368853 6356 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:09.368862 6356 factory.go:656] Stopping watch factory\\\\nI1202 13:43:09.368882 6356 ovnkube.go:599] Stopped 
ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.864710 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.867321 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.867371 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.867386 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.867410 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.867427 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:25Z","lastTransitionTime":"2025-12-02T13:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.896993 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.909525 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:25 crc kubenswrapper[4900]: E1202 13:43:25.909702 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.926056 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.945456 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.963853 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.970143 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.970201 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.970213 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.970231 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.970242 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:25Z","lastTransitionTime":"2025-12-02T13:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.979678 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:25 crc kubenswrapper[4900]: I1202 13:43:25.998563 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.014248 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:26Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.072341 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.072386 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.072401 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.072425 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.072438 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:26Z","lastTransitionTime":"2025-12-02T13:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.175940 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.176005 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.176022 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.176048 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.176069 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:26Z","lastTransitionTime":"2025-12-02T13:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.279354 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.279429 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.279450 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.279479 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.279500 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:26Z","lastTransitionTime":"2025-12-02T13:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.382633 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.382722 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.382742 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.382768 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.382790 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:26Z","lastTransitionTime":"2025-12-02T13:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.485791 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.485857 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.485875 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.485927 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.485946 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:26Z","lastTransitionTime":"2025-12-02T13:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.589257 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.589327 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.589346 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.589373 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.589393 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:26Z","lastTransitionTime":"2025-12-02T13:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.693033 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.693102 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.693121 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.693149 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.693172 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:26Z","lastTransitionTime":"2025-12-02T13:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.796772 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.796838 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.796856 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.796879 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.796898 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:26Z","lastTransitionTime":"2025-12-02T13:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.901342 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.901462 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.901482 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.901547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.901568 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:26Z","lastTransitionTime":"2025-12-02T13:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.909591 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.909680 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:26 crc kubenswrapper[4900]: I1202 13:43:26.909753 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:26 crc kubenswrapper[4900]: E1202 13:43:26.909904 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:26 crc kubenswrapper[4900]: E1202 13:43:26.910090 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:26 crc kubenswrapper[4900]: E1202 13:43:26.910240 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.005247 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.005306 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.005324 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.005358 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.005394 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:27Z","lastTransitionTime":"2025-12-02T13:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.109015 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.109076 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.109094 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.109120 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.109143 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:27Z","lastTransitionTime":"2025-12-02T13:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.212994 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.213059 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.213077 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.213105 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.213124 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:27Z","lastTransitionTime":"2025-12-02T13:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.316326 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.316593 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.316616 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.316667 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.316687 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:27Z","lastTransitionTime":"2025-12-02T13:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.371175 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/2.log" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.372345 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/1.log" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.378111 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0" exitCode=1 Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.378180 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0"} Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.378249 4900 scope.go:117] "RemoveContainer" containerID="d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.382281 4900 scope.go:117] "RemoveContainer" containerID="4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0" Dec 02 13:43:27 crc kubenswrapper[4900]: E1202 13:43:27.382776 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\"" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.405854 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.419780 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.419870 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.419895 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.419919 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.419976 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:27Z","lastTransitionTime":"2025-12-02T13:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.424878 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.445585 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.472609 4900 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.484737 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs\") pod 
\"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:27 crc kubenswrapper[4900]: E1202 13:43:27.484969 4900 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:27 crc kubenswrapper[4900]: E1202 13:43:27.485067 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs podName:1c63b5f6-db87-48a2-b87e-5442db707843 nodeName:}" failed. No retries permitted until 2025-12-02 13:43:43.485035158 +0000 UTC m=+68.900849049 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs") pod "network-metrics-daemon-kzhwn" (UID: "1c63b5f6-db87-48a2-b87e-5442db707843") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.498259 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.523552 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.523628 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.523681 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.523718 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.523744 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:27Z","lastTransitionTime":"2025-12-02T13:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.531764 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367057 6356 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367187 6356 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:09.367752 6356 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:09.368718 6356 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:09.368741 6356 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:09.368776 6356 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:09.368785 6356 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:09.368808 6356 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:43:09.368840 6356 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 13:43:09.368853 6356 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:09.368862 6356 factory.go:656] Stopping watch factory\\\\nI1202 13:43:09.368882 6356 ovnkube.go:599] Stopped 
ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:26.035350 6581 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035408 6581 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035499 6581 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:26.035801 6581 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036244 6581 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036679 6581 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:26.036713 6581 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:26.036750 6581 factory.go:656] Stopping watch factory\\\\nI1202 13:43:26.036767 6581 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:43:26.036795 6581 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.550817 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192
.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.573869 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18f
ac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.596550 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.620243 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.626977 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.627046 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.627063 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.627090 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.627109 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:27Z","lastTransitionTime":"2025-12-02T13:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.642846 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.664329 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.689982 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.711991 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.730874 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.730931 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.730956 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.730987 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.731010 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:27Z","lastTransitionTime":"2025-12-02T13:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.732046 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.747923 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.770498 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.834146 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.834217 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.834242 4900 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.834272 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.834295 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:27Z","lastTransitionTime":"2025-12-02T13:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.909115 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:27 crc kubenswrapper[4900]: E1202 13:43:27.909379 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.937845 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.937913 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.937932 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.937962 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:27 crc kubenswrapper[4900]: I1202 13:43:27.937982 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:27Z","lastTransitionTime":"2025-12-02T13:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.041982 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.042043 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.042060 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.042084 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.042103 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:28Z","lastTransitionTime":"2025-12-02T13:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.146669 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.146769 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.146795 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.146833 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.146866 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:28Z","lastTransitionTime":"2025-12-02T13:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.250042 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.250117 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.250136 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.250168 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.250192 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:28Z","lastTransitionTime":"2025-12-02T13:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.353524 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.353585 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.353602 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.353627 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.353712 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:28Z","lastTransitionTime":"2025-12-02T13:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.384481 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/2.log" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.468606 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.469091 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.469124 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.469152 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.469173 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:28Z","lastTransitionTime":"2025-12-02T13:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.571757 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.571818 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.571835 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.571862 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.571881 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:28Z","lastTransitionTime":"2025-12-02T13:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.675966 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.676050 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.676069 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.676099 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.676127 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:28Z","lastTransitionTime":"2025-12-02T13:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.779776 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.779828 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.779844 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.779867 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.779885 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:28Z","lastTransitionTime":"2025-12-02T13:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.882821 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.883193 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.883353 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.883527 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.883728 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:28Z","lastTransitionTime":"2025-12-02T13:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.909516 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.909596 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.909631 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:28 crc kubenswrapper[4900]: E1202 13:43:28.909752 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:28 crc kubenswrapper[4900]: E1202 13:43:28.909871 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:28 crc kubenswrapper[4900]: E1202 13:43:28.909992 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.987806 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.987873 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.987900 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.987932 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:28 crc kubenswrapper[4900]: I1202 13:43:28.987954 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:28Z","lastTransitionTime":"2025-12-02T13:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.092147 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.092218 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.092238 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.092264 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.092286 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:29Z","lastTransitionTime":"2025-12-02T13:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.195050 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.195120 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.195146 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.195180 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.195207 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:29Z","lastTransitionTime":"2025-12-02T13:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.298766 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.298855 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.298878 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.298911 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.298941 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:29Z","lastTransitionTime":"2025-12-02T13:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.402105 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.402175 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.402193 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.402218 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.402241 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:29Z","lastTransitionTime":"2025-12-02T13:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.505078 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.505141 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.505160 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.505186 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.505206 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:29Z","lastTransitionTime":"2025-12-02T13:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.608864 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.608953 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.608976 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.609004 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.609039 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:29Z","lastTransitionTime":"2025-12-02T13:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.712018 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.712083 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.712103 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.712135 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.712158 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:29Z","lastTransitionTime":"2025-12-02T13:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.815605 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.815723 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.815752 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.815780 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.815801 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:29Z","lastTransitionTime":"2025-12-02T13:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.909258 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:29 crc kubenswrapper[4900]: E1202 13:43:29.909497 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.919117 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.919209 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.919229 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.919288 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:29 crc kubenswrapper[4900]: I1202 13:43:29.919310 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:29Z","lastTransitionTime":"2025-12-02T13:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.022203 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.022282 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.022308 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.022339 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.022357 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:30Z","lastTransitionTime":"2025-12-02T13:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.126263 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.126342 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.126367 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.126398 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.126417 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:30Z","lastTransitionTime":"2025-12-02T13:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.229390 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.229457 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.229475 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.229501 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.229531 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:30Z","lastTransitionTime":"2025-12-02T13:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.332805 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.332883 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.332902 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.332931 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.332953 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:30Z","lastTransitionTime":"2025-12-02T13:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.436483 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.436626 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.436719 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.436760 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.436783 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:30Z","lastTransitionTime":"2025-12-02T13:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.540346 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.540402 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.540423 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.540449 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.540467 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:30Z","lastTransitionTime":"2025-12-02T13:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.643804 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.643880 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.643904 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.643939 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.643961 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:30Z","lastTransitionTime":"2025-12-02T13:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.747244 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.747321 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.747342 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.747371 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.747392 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:30Z","lastTransitionTime":"2025-12-02T13:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.851003 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.851077 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.851095 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.851122 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.851145 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:30Z","lastTransitionTime":"2025-12-02T13:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.909928 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.910015 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:30 crc kubenswrapper[4900]: E1202 13:43:30.910183 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.910220 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:30 crc kubenswrapper[4900]: E1202 13:43:30.910450 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:30 crc kubenswrapper[4900]: E1202 13:43:30.910540 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.954263 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.954336 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.954363 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.954400 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:30 crc kubenswrapper[4900]: I1202 13:43:30.954428 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:30Z","lastTransitionTime":"2025-12-02T13:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.058258 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.058337 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.058356 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.058385 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.058410 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:31Z","lastTransitionTime":"2025-12-02T13:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.162409 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.162475 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.162494 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.162520 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.162541 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:31Z","lastTransitionTime":"2025-12-02T13:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.266498 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.266561 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.266579 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.266608 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.266627 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:31Z","lastTransitionTime":"2025-12-02T13:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.371852 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.371920 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.371942 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.371967 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.371987 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:31Z","lastTransitionTime":"2025-12-02T13:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.476024 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.476108 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.476132 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.476170 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.476197 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:31Z","lastTransitionTime":"2025-12-02T13:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.579685 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.579744 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.579766 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.579794 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.579814 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:31Z","lastTransitionTime":"2025-12-02T13:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.683319 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.683391 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.683416 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.683448 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.683473 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:31Z","lastTransitionTime":"2025-12-02T13:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.787176 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.787241 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.787260 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.787285 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.787307 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:31Z","lastTransitionTime":"2025-12-02T13:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.890870 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.890943 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.890973 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.891005 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.891031 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:31Z","lastTransitionTime":"2025-12-02T13:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.917380 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:31 crc kubenswrapper[4900]: E1202 13:43:31.917623 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.994903 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.994974 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.994993 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.995020 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:31 crc kubenswrapper[4900]: I1202 13:43:31.995041 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:31Z","lastTransitionTime":"2025-12-02T13:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.099056 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.099144 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.099170 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.099201 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.099225 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:32Z","lastTransitionTime":"2025-12-02T13:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.203171 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.203228 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.203241 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.203263 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.203277 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:32Z","lastTransitionTime":"2025-12-02T13:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.310832 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.311238 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.311271 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.311306 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.311333 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:32Z","lastTransitionTime":"2025-12-02T13:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.414368 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.414422 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.414434 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.414481 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.414494 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:32Z","lastTransitionTime":"2025-12-02T13:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.517956 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.518003 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.518013 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.518030 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.518042 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:32Z","lastTransitionTime":"2025-12-02T13:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.621750 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.621843 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.621866 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.621901 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.621920 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:32Z","lastTransitionTime":"2025-12-02T13:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.724903 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.724977 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.724996 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.725023 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.725042 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:32Z","lastTransitionTime":"2025-12-02T13:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.828754 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.828841 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.828901 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.828936 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.828959 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:32Z","lastTransitionTime":"2025-12-02T13:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.909196 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.909298 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:32 crc kubenswrapper[4900]: E1202 13:43:32.909425 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.909298 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:32 crc kubenswrapper[4900]: E1202 13:43:32.909592 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:32 crc kubenswrapper[4900]: E1202 13:43:32.909775 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.931915 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.931984 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.931995 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.932014 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:32 crc kubenswrapper[4900]: I1202 13:43:32.932031 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:32Z","lastTransitionTime":"2025-12-02T13:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.035458 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.035520 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.035530 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.035553 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.035566 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:33Z","lastTransitionTime":"2025-12-02T13:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.139350 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.139426 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.139444 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.139469 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.139487 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:33Z","lastTransitionTime":"2025-12-02T13:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.243613 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.243705 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.243724 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.243751 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.243769 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:33Z","lastTransitionTime":"2025-12-02T13:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.347426 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.347499 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.347516 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.347543 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.347563 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:33Z","lastTransitionTime":"2025-12-02T13:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.451046 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.451149 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.451167 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.451194 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.451212 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:33Z","lastTransitionTime":"2025-12-02T13:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.554695 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.554761 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.554780 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.554806 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.554824 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:33Z","lastTransitionTime":"2025-12-02T13:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.658059 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.658131 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.658149 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.658178 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.658195 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:33Z","lastTransitionTime":"2025-12-02T13:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.761481 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.761578 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.761600 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.761632 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.761698 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:33Z","lastTransitionTime":"2025-12-02T13:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.865254 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.865327 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.865347 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.865375 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.865398 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:33Z","lastTransitionTime":"2025-12-02T13:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.910157 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:33 crc kubenswrapper[4900]: E1202 13:43:33.910471 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.968727 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.968875 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.968905 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.968943 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:33 crc kubenswrapper[4900]: I1202 13:43:33.968968 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:33Z","lastTransitionTime":"2025-12-02T13:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.072975 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.073033 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.073047 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.073065 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.073079 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.176072 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.176158 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.176167 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.176184 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.176196 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.279388 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.279467 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.279492 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.279524 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.279548 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.383385 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.383459 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.383481 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.383519 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.383541 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.441720 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.441799 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.441823 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.441853 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.441874 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: E1202 13:43:34.465519 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:34Z is after 2025-08-24T17:21:41Z"
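This entry surfaces the underlying failure: the kubelet's node-status patch is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired 2025-08-24T17:21:41Z, well before the current time 2025-12-02T13:43:34Z. A hedged Python sketch for confirming that from the node follows; the host and port come from the failed Post URL in the log, and the third-party cryptography package is an assumed dependency for parsing the certificate.

#!/usr/bin/env python3
"""Minimal sketch: inspect the serving certificate behind the webhook
failure above. Endpoint taken from the log; `cryptography` is assumed."""
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509  # third-party dependency (assumption)

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the failed Post URL

def fetch_not_after(host, port):
    # Disable verification on purpose: we want to look at the very
    # certificate that the kubelet's TLS client refused to accept.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes
    cert = x509.load_der_x509_certificate(der)
    # not_valid_after is a naive datetime in UTC; make it aware to compare.
    return cert.not_valid_after.replace(tzinfo=timezone.utc)

if __name__ == "__main__":
    not_after = fetch_not_after(HOST, PORT)
    now = datetime.now(timezone.utc)
    state = "EXPIRED" if now > not_after else "valid"
    print("serving cert notAfter=%s (%s at %s)" % (not_after.isoformat(), state, now.isoformat()))

The kubelet keeps retrying the patch (the entries at 13:43:34.493701 and later below repeat the same payload and the same x509 error), so the node stays NotReady until the webhook's serving certificate is rotated; on a CRC single-node cluster this pattern typically appears when the VM has been powered off past the certificates' validity window.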
event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.471520 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.471542 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.471565 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: E1202 13:43:34.493701 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.499329 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.499440 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.499474 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.499503 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.499523 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: E1202 13:43:34.520006 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.526028 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.526095 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.526116 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.526148 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.526168 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: E1202 13:43:34.548875 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.555019 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.555084 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.555104 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.555130 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.555151 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: E1202 13:43:34.576333 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:34 crc kubenswrapper[4900]: E1202 13:43:34.576804 4900 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.580033 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.580088 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.580107 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.580134 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.580154 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.684445 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.684513 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.684526 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.684548 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.684582 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.787570 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.787690 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.787715 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.787747 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.787768 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.890955 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.891023 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.891040 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.891067 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.891086 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.909386 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.909440 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:34 crc kubenswrapper[4900]: E1202 13:43:34.909724 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.909869 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:34 crc kubenswrapper[4900]: E1202 13:43:34.910100 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:34 crc kubenswrapper[4900]: E1202 13:43:34.910166 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.932696 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.954473 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.973231 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.992811 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.994774 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.994868 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.994888 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.994960 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:34 crc kubenswrapper[4900]: I1202 13:43:34.994981 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:34Z","lastTransitionTime":"2025-12-02T13:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.017330 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.042604 4900 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.076307 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09f
f8fd4967ea0457c948830ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41133c36885b59936d692b4e6a560282ef463129ddc2a2f30a054c48efa3110\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367057 6356 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:09.367187 6356 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:09.367752 6356 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:09.368718 6356 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:09.368741 6356 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:09.368776 6356 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:43:09.368785 6356 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:43:09.368808 6356 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:43:09.368840 6356 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 13:43:09.368853 6356 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:43:09.368862 6356 factory.go:656] Stopping watch factory\\\\nI1202 13:43:09.368882 6356 ovnkube.go:599] Stopped ovnkube\\\\nI1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:26.035350 6581 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035408 6581 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035499 6581 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:26.035801 6581 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036244 6581 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036679 6581 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:26.036713 6581 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:26.036750 6581 factory.go:656] Stopping watch 
factory\\\\nI1202 13:43:26.036767 6581 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:43:26.036795 6581 handler.go:208] Removed *v1.Node event handler 2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86e
b0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.096729 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.099051 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.099107 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.099124 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.099150 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.099170 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:35Z","lastTransitionTime":"2025-12-02T13:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.120288 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.141488 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.161975 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.184693 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.202766 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.202818 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.202839 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.202864 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.202885 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:35Z","lastTransitionTime":"2025-12-02T13:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.203714 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.226279 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.247051 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.266615 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.285890 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.306812 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.306875 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.306895 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.306925 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.306945 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:35Z","lastTransitionTime":"2025-12-02T13:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.409638 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.409733 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.409755 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.409783 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.409805 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:35Z","lastTransitionTime":"2025-12-02T13:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.512610 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.512692 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.512708 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.512726 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.512740 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:35Z","lastTransitionTime":"2025-12-02T13:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.616081 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.616163 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.616184 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.616212 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.616234 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:35Z","lastTransitionTime":"2025-12-02T13:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.720637 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.720735 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.720753 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.720785 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.720809 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:35Z","lastTransitionTime":"2025-12-02T13:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.824198 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.824851 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.824871 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.824898 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.824922 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:35Z","lastTransitionTime":"2025-12-02T13:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.909444 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:35 crc kubenswrapper[4900]: E1202 13:43:35.909672 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.928746 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.928815 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.928836 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.928865 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:35 crc kubenswrapper[4900]: I1202 13:43:35.928892 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:35Z","lastTransitionTime":"2025-12-02T13:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.042831 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.044156 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.044442 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.044635 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.044818 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:36Z","lastTransitionTime":"2025-12-02T13:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.147406 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.147473 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.147493 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.147520 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.147544 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:36Z","lastTransitionTime":"2025-12-02T13:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.251140 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.251205 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.251221 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.251246 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.251263 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:36Z","lastTransitionTime":"2025-12-02T13:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.354804 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.354926 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.354944 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.354970 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.354989 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:36Z","lastTransitionTime":"2025-12-02T13:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.457969 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.458020 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.458035 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.458059 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.458073 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:36Z","lastTransitionTime":"2025-12-02T13:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.561580 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.561693 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.561713 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.561741 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.561763 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:36Z","lastTransitionTime":"2025-12-02T13:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.664609 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.664728 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.664752 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.664780 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.664800 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:36Z","lastTransitionTime":"2025-12-02T13:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.768056 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.768498 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.768740 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.769134 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.769313 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:36Z","lastTransitionTime":"2025-12-02T13:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.872728 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.873131 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.873282 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.873423 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.873567 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:36Z","lastTransitionTime":"2025-12-02T13:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.909612 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.909694 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.909812 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:36 crc kubenswrapper[4900]: E1202 13:43:36.910200 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:36 crc kubenswrapper[4900]: E1202 13:43:36.910329 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:36 crc kubenswrapper[4900]: E1202 13:43:36.910494 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.976494 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.976561 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.976578 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.976603 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:36 crc kubenswrapper[4900]: I1202 13:43:36.976621 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:36Z","lastTransitionTime":"2025-12-02T13:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.079163 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.079205 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.079219 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.079238 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.079253 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:37Z","lastTransitionTime":"2025-12-02T13:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.182368 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.182409 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.182420 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.182438 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.182451 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:37Z","lastTransitionTime":"2025-12-02T13:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.285813 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.285868 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.285885 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.285909 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.285927 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:37Z","lastTransitionTime":"2025-12-02T13:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.389376 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.389412 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.389424 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.389441 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.389453 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:37Z","lastTransitionTime":"2025-12-02T13:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.492183 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.492237 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.492255 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.492279 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.492297 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:37Z","lastTransitionTime":"2025-12-02T13:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.595766 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.595816 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.595838 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.595866 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.595882 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:37Z","lastTransitionTime":"2025-12-02T13:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.699596 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.699699 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.699719 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.699744 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.699762 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:37Z","lastTransitionTime":"2025-12-02T13:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.802851 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.802918 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.802938 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.802962 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.802981 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:37Z","lastTransitionTime":"2025-12-02T13:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.906237 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.906556 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.906749 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.906891 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.907008 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:37Z","lastTransitionTime":"2025-12-02T13:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:37 crc kubenswrapper[4900]: I1202 13:43:37.909576 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:37 crc kubenswrapper[4900]: E1202 13:43:37.909798 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.010744 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.010793 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.010804 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.010821 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.010832 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:38Z","lastTransitionTime":"2025-12-02T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.113529 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.113595 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.113613 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.113667 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.113701 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:38Z","lastTransitionTime":"2025-12-02T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.217125 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.217214 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.217232 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.217258 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.217277 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:38Z","lastTransitionTime":"2025-12-02T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.320511 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.320585 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.320608 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.320687 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.320714 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:38Z","lastTransitionTime":"2025-12-02T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.423840 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.423893 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.423909 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.423933 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.423950 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:38Z","lastTransitionTime":"2025-12-02T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.526825 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.526893 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.526912 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.526941 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.526959 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:38Z","lastTransitionTime":"2025-12-02T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.630557 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.630626 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.630686 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.630712 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.630730 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:38Z","lastTransitionTime":"2025-12-02T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.733876 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.733930 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.733941 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.733959 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.733974 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:38Z","lastTransitionTime":"2025-12-02T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.836870 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.836944 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.836952 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.836969 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.836980 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:38Z","lastTransitionTime":"2025-12-02T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.909940 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.909953 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:38 crc kubenswrapper[4900]: E1202 13:43:38.910148 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.910208 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:38 crc kubenswrapper[4900]: E1202 13:43:38.910839 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:38 crc kubenswrapper[4900]: E1202 13:43:38.911043 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.911256 4900 scope.go:117] "RemoveContainer" containerID="4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0" Dec 02 13:43:38 crc kubenswrapper[4900]: E1202 13:43:38.911526 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\"" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.930578 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.941319 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.941454 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.941472 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.941534 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.941554 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:38Z","lastTransitionTime":"2025-12-02T13:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.949015 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.963626 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.977671 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:38 crc kubenswrapper[4900]: I1202 13:43:38.993533 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.004511 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:43:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.014340 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.026184 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:39Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.041328 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.045076 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.045140 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.045154 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.045175 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.045189 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:39Z","lastTransitionTime":"2025-12-02T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.062278 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:26.035350 6581 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035408 6581 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035499 6581 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:26.035801 6581 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036244 6581 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036679 6581 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:26.036713 6581 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:26.036750 6581 factory.go:656] Stopping watch factory\\\\nI1202 13:43:26.036767 6581 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:43:26.036795 6581 handler.go:208] Removed *v1.Node event handler 2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.072397 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.084048 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.095988 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.107460 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:39Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.123557 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.139208 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.147453 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.147488 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.147497 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.147514 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.147525 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:39Z","lastTransitionTime":"2025-12-02T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.154896 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.249843 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.249894 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.249906 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.249929 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.249942 4900 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:39Z","lastTransitionTime":"2025-12-02T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.352683 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.352747 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.352765 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.352789 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.352807 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:39Z","lastTransitionTime":"2025-12-02T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.456704 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.456754 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.456768 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.456786 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.456797 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:39Z","lastTransitionTime":"2025-12-02T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.560080 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.560154 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.560177 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.560213 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.560241 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:39Z","lastTransitionTime":"2025-12-02T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.663775 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.663835 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.663848 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.663904 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.663919 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:39Z","lastTransitionTime":"2025-12-02T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.767916 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.767976 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.767990 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.768012 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.768024 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:39Z","lastTransitionTime":"2025-12-02T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.871259 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.871332 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.871353 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.871382 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.871404 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:39Z","lastTransitionTime":"2025-12-02T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.909892 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:39 crc kubenswrapper[4900]: E1202 13:43:39.910047 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.974820 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.974857 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.974869 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.974885 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:39 crc kubenswrapper[4900]: I1202 13:43:39.974896 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:39Z","lastTransitionTime":"2025-12-02T13:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.080424 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.080471 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.080481 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.080499 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.080512 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:40Z","lastTransitionTime":"2025-12-02T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.183210 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.183249 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.183260 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.183276 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.183289 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:40Z","lastTransitionTime":"2025-12-02T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.285894 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.285961 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.285980 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.286004 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.286024 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:40Z","lastTransitionTime":"2025-12-02T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.388268 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.388306 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.388316 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.388333 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.388346 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:40Z","lastTransitionTime":"2025-12-02T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.490525 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.490572 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.490583 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.490600 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.490612 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:40Z","lastTransitionTime":"2025-12-02T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.593908 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.593983 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.594002 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.594035 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.594059 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:40Z","lastTransitionTime":"2025-12-02T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.701837 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.701940 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.701966 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.702000 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.702029 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:40Z","lastTransitionTime":"2025-12-02T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.805683 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.805766 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.805788 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.805821 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.805843 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:40Z","lastTransitionTime":"2025-12-02T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.908158 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.908197 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.908211 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.908229 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.908244 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:40Z","lastTransitionTime":"2025-12-02T13:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.910105 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.910148 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:40 crc kubenswrapper[4900]: I1202 13:43:40.910199 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:40 crc kubenswrapper[4900]: E1202 13:43:40.910341 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:40 crc kubenswrapper[4900]: E1202 13:43:40.910425 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:40 crc kubenswrapper[4900]: E1202 13:43:40.910555 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.012607 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.012674 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.012689 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.012710 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.012725 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:41Z","lastTransitionTime":"2025-12-02T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.116281 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.116358 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.116379 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.116411 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.116432 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:41Z","lastTransitionTime":"2025-12-02T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.219812 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.219865 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.219879 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.219900 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.219916 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:41Z","lastTransitionTime":"2025-12-02T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.323553 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.323628 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.323674 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.323731 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.323750 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:41Z","lastTransitionTime":"2025-12-02T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.426584 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.426622 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.426631 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.426662 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.426672 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:41Z","lastTransitionTime":"2025-12-02T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.529315 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.529359 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.529372 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.529392 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.529406 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:41Z","lastTransitionTime":"2025-12-02T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.632279 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.632333 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.632353 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.632379 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.632402 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:41Z","lastTransitionTime":"2025-12-02T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.735815 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.735883 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.735901 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.735927 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.735946 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:41Z","lastTransitionTime":"2025-12-02T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.839746 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.839807 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.839821 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.839844 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.839863 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:41Z","lastTransitionTime":"2025-12-02T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.920150 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:41 crc kubenswrapper[4900]: E1202 13:43:41.920377 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.943222 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.943284 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.943305 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.943338 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:41 crc kubenswrapper[4900]: I1202 13:43:41.943362 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:41Z","lastTransitionTime":"2025-12-02T13:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.046323 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.046401 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.046427 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.046461 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.046485 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:42Z","lastTransitionTime":"2025-12-02T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.150079 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.150140 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.150157 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.150184 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.150204 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:42Z","lastTransitionTime":"2025-12-02T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.252900 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.252953 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.252970 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.252991 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.253006 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:42Z","lastTransitionTime":"2025-12-02T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.355686 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.355745 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.355764 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.355792 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.355812 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:42Z","lastTransitionTime":"2025-12-02T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.457847 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.457890 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.457902 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.457921 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.457935 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:42Z","lastTransitionTime":"2025-12-02T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.560702 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.560734 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.560745 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.560758 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.560770 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:42Z","lastTransitionTime":"2025-12-02T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.663616 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.663714 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.663738 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.663766 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.663793 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:42Z","lastTransitionTime":"2025-12-02T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.766994 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.767059 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.767083 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.767106 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.767124 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:42Z","lastTransitionTime":"2025-12-02T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.870024 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.870046 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.870054 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.870066 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.870073 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:42Z","lastTransitionTime":"2025-12-02T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.909901 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.909949 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:42 crc kubenswrapper[4900]: E1202 13:43:42.910113 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:42 crc kubenswrapper[4900]: E1202 13:43:42.910287 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.910568 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:42 crc kubenswrapper[4900]: E1202 13:43:42.910732 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.972456 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.972517 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.972534 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.972554 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:42 crc kubenswrapper[4900]: I1202 13:43:42.972570 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:42Z","lastTransitionTime":"2025-12-02T13:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.076050 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.076112 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.076136 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.076163 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.076185 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:43Z","lastTransitionTime":"2025-12-02T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.184090 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.184256 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.184269 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.184688 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.184777 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:43Z","lastTransitionTime":"2025-12-02T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.287940 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.288001 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.288021 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.288045 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.288063 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:43Z","lastTransitionTime":"2025-12-02T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
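From this point the log interleaves a second failure: every pod status patch below is rejected because the pod.network-node-identity.openshift.io webhook serves a TLS certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-12-02. The sketch below is illustrative only (the certificate path is an assumption, not taken from the log); the check itself is the standard NotBefore/NotAfter window comparison that Go's crypto/x509 applies before the "certificate has expired or is not yet valid" error is surfaced:

// certwindow.go - illustrative sketch of the x509 validity-window check
// behind the "certificate has expired or is not yet valid" errors below.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path is hypothetical; any PEM-encoded certificate works for the demo.
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// The case in this log: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

On a CRC snapshot this pattern is the usual symptom of resuming a VM long after its internal certificates were minted: the kubelet keeps retrying each status patch and logging the same x509 error until the cluster rotates its certificates.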
Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.391793 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.391843 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.391860 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.391884 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.391902 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:43Z","lastTransitionTime":"2025-12-02T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.456962 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r8pv9_7cacd7d0-a1a1-4ea0-b918-a73c8220e500/kube-multus/0.log" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.457049 4900 generic.go:334] "Generic (PLEG): container finished" podID="7cacd7d0-a1a1-4ea0-b918-a73c8220e500" containerID="7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a" exitCode=1 Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.457103 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r8pv9" event={"ID":"7cacd7d0-a1a1-4ea0-b918-a73c8220e500","Type":"ContainerDied","Data":"7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a"} Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.457853 4900 scope.go:117] "RemoveContainer" containerID="7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.479101 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.494607 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.494679 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.494697 4900 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.494722 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.494741 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:43Z","lastTransitionTime":"2025-12-02T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.498470 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.513275 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.531982 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.536850 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:43 crc kubenswrapper[4900]: E1202 13:43:43.537033 4900 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:43 crc kubenswrapper[4900]: E1202 13:43:43.537160 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs podName:1c63b5f6-db87-48a2-b87e-5442db707843 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:15.537126043 +0000 UTC m=+100.952939924 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs") pod "network-metrics-daemon-kzhwn" (UID: "1c63b5f6-db87-48a2-b87e-5442db707843") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.554052 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.568514 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.584822 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.597069 4900 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.597111 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.597124 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.597142 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.597155 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:43Z","lastTransitionTime":"2025-12-02T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.608529 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.631135 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.645080 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.672851 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09f
f8fd4967ea0457c948830ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:26.035350 6581 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035408 6581 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035499 6581 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:26.035801 6581 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036244 6581 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036679 6581 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:26.036713 6581 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:26.036750 6581 factory.go:656] Stopping watch factory\\\\nI1202 13:43:26.036767 6581 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:43:26.036795 6581 handler.go:208] Removed *v1.Node event handler 2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.687977 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.699729 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.699790 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.699803 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.699825 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.699842 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:43Z","lastTransitionTime":"2025-12-02T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.707196 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.727261 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.746902 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.763880 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:43Z\\\",\\\"message\\\":\\\"2025-12-02T13:42:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c\\\\n2025-12-02T13:42:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c to /host/opt/cni/bin/\\\\n2025-12-02T13:42:58Z [verbose] multus-daemon started\\\\n2025-12-02T13:42:58Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:43:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.780879 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:43Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.803220 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.803267 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.803281 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.803299 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.803312 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:43Z","lastTransitionTime":"2025-12-02T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.906635 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.906702 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.906712 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.906729 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.906745 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:43Z","lastTransitionTime":"2025-12-02T13:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:43 crc kubenswrapper[4900]: I1202 13:43:43.909872 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:43 crc kubenswrapper[4900]: E1202 13:43:43.910022 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.009684 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.009726 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.009737 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.009759 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.009769 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.113135 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.113205 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.113224 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.113252 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.113272 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.216169 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.216216 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.216227 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.216246 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.216257 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.320075 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.320121 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.320132 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.320150 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.320161 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.423418 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.423464 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.423472 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.423492 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.423504 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.462839 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r8pv9_7cacd7d0-a1a1-4ea0-b918-a73c8220e500/kube-multus/0.log" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.462922 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r8pv9" event={"ID":"7cacd7d0-a1a1-4ea0-b918-a73c8220e500","Type":"ContainerStarted","Data":"ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.479737 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.501243 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\
\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.517924 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.527015 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.527085 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.527104 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.527129 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.527147 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.530202 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.547143 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.566271 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.587532 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.618569 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:26.035350 6581 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035408 6581 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035499 6581 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:26.035801 6581 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036244 6581 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036679 6581 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:26.036713 6581 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:26.036750 6581 factory.go:656] Stopping watch factory\\\\nI1202 13:43:26.036767 6581 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:43:26.036795 6581 handler.go:208] Removed *v1.Node event handler 2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s 
restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.630625 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.630698 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.630712 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.630732 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.630746 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.636538 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.654222 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.669813 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.690696 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.711107 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.727531 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:43Z\\\",\\\"message\\\":\\\"2025-12-02T13:42:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c\\\\n2025-12-02T13:42:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c to /host/opt/cni/bin/\\\\n2025-12-02T13:42:58Z [verbose] multus-daemon started\\\\n2025-12-02T13:42:58Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:43:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.733460 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.733506 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.733516 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.733533 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.733546 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.747910 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 
13:43:44.769038 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.771532 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.771601 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.771619 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.771676 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.771696 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.783355 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: E1202 13:43:44.792004 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch 
status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b1
77c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.796425 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.796687 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.796882 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.797038 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.797171 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: E1202 13:43:44.817808 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.822392 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.822451 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.822464 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.822487 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.822502 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: E1202 13:43:44.841996 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.846861 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.847087 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.847240 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.847420 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.847552 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: E1202 13:43:44.863557 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.867176 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.867204 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.867214 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.867230 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.867242 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: E1202 13:43:44.880325 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: E1202 13:43:44.880441 4900 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.882141 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.882164 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.882172 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.882187 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.882198 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.909766 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:44 crc kubenswrapper[4900]: E1202 13:43:44.909922 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.910100 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.910157 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:44 crc kubenswrapper[4900]: E1202 13:43:44.910542 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:44 crc kubenswrapper[4900]: E1202 13:43:44.910683 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.933273 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.954561 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.982056 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09f
f8fd4967ea0457c948830ac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:26.035350 6581 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035408 6581 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035499 6581 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:26.035801 6581 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036244 6581 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036679 6581 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:26.036713 6581 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:26.036750 6581 factory.go:656] Stopping watch factory\\\\nI1202 13:43:26.036767 6581 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:43:26.036795 6581 handler.go:208] Removed *v1.Node event handler 2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.985042 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.985304 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.985466 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.985616 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.985777 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:44Z","lastTransitionTime":"2025-12-02T13:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:44 crc kubenswrapper[4900]: I1202 13:43:44.997527 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.020702 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.038184 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.056824 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.078329 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:43Z\\\",\\\"message\\\":\\\"2025-12-02T13:42:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c\\\\n2025-12-02T13:42:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c to /host/opt/cni/bin/\\\\n2025-12-02T13:42:58Z [verbose] multus-daemon started\\\\n2025-12-02T13:42:58Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:43:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.088795 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.088834 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.088843 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.088859 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.088873 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:45Z","lastTransitionTime":"2025-12-02T13:43:45Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.100092 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-
12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.119710 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.139224 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.156068 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.174675 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.191293 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.191711 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.191909 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.192097 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.192268 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:45Z","lastTransitionTime":"2025-12-02T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.195618 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.210920 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.231454 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.255587 4900 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.295420 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.295477 4900 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.295497 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.295524 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.295544 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:45Z","lastTransitionTime":"2025-12-02T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.398597 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.398695 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.398715 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.398744 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.398765 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:45Z","lastTransitionTime":"2025-12-02T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.501531 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.501953 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.502132 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.502281 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.502497 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:45Z","lastTransitionTime":"2025-12-02T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.606223 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.606286 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.606303 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.606328 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.606349 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:45Z","lastTransitionTime":"2025-12-02T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.709865 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.709919 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.709936 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.709958 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.709973 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:45Z","lastTransitionTime":"2025-12-02T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.813415 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.813483 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.813503 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.813531 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.813552 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:45Z","lastTransitionTime":"2025-12-02T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.909966 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:45 crc kubenswrapper[4900]: E1202 13:43:45.910194 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.916977 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.917028 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.917046 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.917071 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:45 crc kubenswrapper[4900]: I1202 13:43:45.917090 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:45Z","lastTransitionTime":"2025-12-02T13:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.019444 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.019519 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.019536 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.019564 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.019584 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:46Z","lastTransitionTime":"2025-12-02T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.122758 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.122817 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.122834 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.122858 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.122880 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:46Z","lastTransitionTime":"2025-12-02T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.226139 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.226209 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.226229 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.226257 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.226288 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:46Z","lastTransitionTime":"2025-12-02T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.328293 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.328339 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.328350 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.328366 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.328379 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:46Z","lastTransitionTime":"2025-12-02T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.430844 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.430908 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.430920 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.430939 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.430953 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:46Z","lastTransitionTime":"2025-12-02T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.538621 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.538759 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.538780 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.538891 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.538917 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:46Z","lastTransitionTime":"2025-12-02T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.641327 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.641372 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.641382 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.641400 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.641412 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:46Z","lastTransitionTime":"2025-12-02T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.743980 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.744062 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.744085 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.744121 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.744146 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:46Z","lastTransitionTime":"2025-12-02T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.847595 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.847705 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.847726 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.847756 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.847778 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:46Z","lastTransitionTime":"2025-12-02T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.909592 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.909740 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:46 crc kubenswrapper[4900]: E1202 13:43:46.909800 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.909855 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:46 crc kubenswrapper[4900]: E1202 13:43:46.909973 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:46 crc kubenswrapper[4900]: E1202 13:43:46.910058 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.950275 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.950313 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.950325 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.950341 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:46 crc kubenswrapper[4900]: I1202 13:43:46.950354 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:46Z","lastTransitionTime":"2025-12-02T13:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.053174 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.053229 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.053246 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.053273 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.053293 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:47Z","lastTransitionTime":"2025-12-02T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.156431 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.156503 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.156520 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.156546 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.156565 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:47Z","lastTransitionTime":"2025-12-02T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.259767 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.259828 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.259848 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.259873 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.259892 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:47Z","lastTransitionTime":"2025-12-02T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.363005 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.363055 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.363072 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.363097 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.363139 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:47Z","lastTransitionTime":"2025-12-02T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.465620 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.465725 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.465746 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.465775 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.465797 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:47Z","lastTransitionTime":"2025-12-02T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.568832 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.568904 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.568923 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.568957 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.568985 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:47Z","lastTransitionTime":"2025-12-02T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.672347 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.672410 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.672429 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.672458 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.672476 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:47Z","lastTransitionTime":"2025-12-02T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.775306 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.775356 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.775372 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.775397 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.775417 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:47Z","lastTransitionTime":"2025-12-02T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.879321 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.879391 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.879409 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.879437 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.879461 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:47Z","lastTransitionTime":"2025-12-02T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.909101 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:47 crc kubenswrapper[4900]: E1202 13:43:47.909329 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.983579 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.983671 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.983692 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.983730 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:47 crc kubenswrapper[4900]: I1202 13:43:47.983751 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:47Z","lastTransitionTime":"2025-12-02T13:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.086638 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.086738 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.086757 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.086784 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.086803 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:48Z","lastTransitionTime":"2025-12-02T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.190132 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.190186 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.190202 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.190226 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.190244 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:48Z","lastTransitionTime":"2025-12-02T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.293231 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.293295 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.293314 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.293341 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.293360 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:48Z","lastTransitionTime":"2025-12-02T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.397232 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.397293 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.397311 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.397336 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.397354 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:48Z","lastTransitionTime":"2025-12-02T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.499758 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.499830 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.499853 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.499879 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.499940 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:48Z","lastTransitionTime":"2025-12-02T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.604163 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.604221 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.604232 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.604252 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.604266 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:48Z","lastTransitionTime":"2025-12-02T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.708231 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.708303 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.708322 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.708349 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.708369 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:48Z","lastTransitionTime":"2025-12-02T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.812009 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.812093 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.812111 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.812137 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.812159 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:48Z","lastTransitionTime":"2025-12-02T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.909106 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.909230 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.909459 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:48 crc kubenswrapper[4900]: E1202 13:43:48.909470 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:48 crc kubenswrapper[4900]: E1202 13:43:48.909694 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:48 crc kubenswrapper[4900]: E1202 13:43:48.909847 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.923217 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.923289 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.923317 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.923350 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:48 crc kubenswrapper[4900]: I1202 13:43:48.923377 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:48Z","lastTransitionTime":"2025-12-02T13:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.026532 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.026614 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.026635 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.026739 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.026763 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:49Z","lastTransitionTime":"2025-12-02T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.130395 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.130462 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.130481 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.130509 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.130530 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:49Z","lastTransitionTime":"2025-12-02T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.234113 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.234180 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.234201 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.234233 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.234254 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:49Z","lastTransitionTime":"2025-12-02T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.338261 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.338364 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.338390 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.338426 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.338450 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:49Z","lastTransitionTime":"2025-12-02T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.441781 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.441840 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.441859 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.441885 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.441906 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:49Z","lastTransitionTime":"2025-12-02T13:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:49 crc kubenswrapper[4900]: I1202 13:43:49.909466 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:43:49 crc kubenswrapper[4900]: E1202 13:43:49.909755 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843"
Dec 02 13:43:50 crc kubenswrapper[4900]: I1202 13:43:50.065430 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:50 crc kubenswrapper[4900]: I1202 13:43:50.065496 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:50 crc kubenswrapper[4900]: I1202 13:43:50.065514 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:50 crc kubenswrapper[4900]: I1202 13:43:50.065540 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:50 crc kubenswrapper[4900]: I1202 13:43:50.065559 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:50Z","lastTransitionTime":"2025-12-02T13:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:50 crc kubenswrapper[4900]: I1202 13:43:50.909769 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:43:50 crc kubenswrapper[4900]: I1202 13:43:50.909788 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:43:50 crc kubenswrapper[4900]: E1202 13:43:50.909969 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:43:50 crc kubenswrapper[4900]: I1202 13:43:50.909797 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:43:50 crc kubenswrapper[4900]: E1202 13:43:50.910101 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:43:50 crc kubenswrapper[4900]: E1202 13:43:50.910179 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:43:51 crc kubenswrapper[4900]: I1202 13:43:51.101958 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:51 crc kubenswrapper[4900]: I1202 13:43:51.104023 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:51 crc kubenswrapper[4900]: I1202 13:43:51.104054 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:51 crc kubenswrapper[4900]: I1202 13:43:51.104106 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:51 crc kubenswrapper[4900]: I1202 13:43:51.104131 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:51Z","lastTransitionTime":"2025-12-02T13:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:51 crc kubenswrapper[4900]: I1202 13:43:51.909516 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:43:51 crc kubenswrapper[4900]: E1202 13:43:51.909803 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843"
Dec 02 13:43:51 crc kubenswrapper[4900]: I1202 13:43:51.924395 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Dec 02 13:43:52 crc kubenswrapper[4900]: I1202 13:43:52.036582 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:52 crc kubenswrapper[4900]: I1202 13:43:52.036631 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:52 crc kubenswrapper[4900]: I1202 13:43:52.036672 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:52 crc kubenswrapper[4900]: I1202 13:43:52.036694 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:52 crc kubenswrapper[4900]: I1202 13:43:52.036712 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:52Z","lastTransitionTime":"2025-12-02T13:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:52 crc kubenswrapper[4900]: I1202 13:43:52.909357 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:43:52 crc kubenswrapper[4900]: I1202 13:43:52.909551 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:43:52 crc kubenswrapper[4900]: E1202 13:43:52.909724 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:43:52 crc kubenswrapper[4900]: I1202 13:43:52.909745 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:43:52 crc kubenswrapper[4900]: E1202 13:43:52.909922 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:43:52 crc kubenswrapper[4900]: E1202 13:43:52.910238 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:43:53 crc kubenswrapper[4900]: I1202 13:43:53.074700 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:53 crc kubenswrapper[4900]: I1202 13:43:53.074794 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:53 crc kubenswrapper[4900]: I1202 13:43:53.074812 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:53 crc kubenswrapper[4900]: I1202 13:43:53.074838 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:53 crc kubenswrapper[4900]: I1202 13:43:53.074857 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:53Z","lastTransitionTime":"2025-12-02T13:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:53 crc kubenswrapper[4900]: I1202 13:43:53.909125 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:43:53 crc kubenswrapper[4900]: E1202 13:43:53.909364 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843"
Dec 02 13:43:53 crc kubenswrapper[4900]: I1202 13:43:53.910466 4900 scope.go:117] "RemoveContainer" containerID="4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0"
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.018164 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.018238 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.018256 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.018288 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.018307 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:54Z","lastTransitionTime":"2025-12-02T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.515150 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/2.log"
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.518946 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c"}
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.520578 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd"
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.534964 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.535014 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.535032 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.535054 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.535071 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:54Z","lastTransitionTime":"2025-12-02T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.543574 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.564080 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.577737 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.589181 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.608455 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.625745 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2da7413-f8f7-4a85-8c37-6ebf91a75b0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b6c26f99ca34eb2a84a471e4a5ba769d4c89f6d7f4656d50865c4893de6d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.638376 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.638410 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.638424 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.638445 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.638457 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:54Z","lastTransitionTime":"2025-12-02T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.643847 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.669487 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\
\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:26.035350 6581 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035408 6581 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035499 6581 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:26.035801 6581 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036244 6581 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036679 6581 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:26.036713 6581 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:26.036750 6581 factory.go:656] Stopping watch factory\\\\nI1202 13:43:26.036767 6581 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:43:26.036795 6581 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.690494 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.707568 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.725536 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.740853 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.740895 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.740908 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.740935 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 
13:43:54.740950 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:54Z","lastTransitionTime":"2025-12-02T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.745501 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.761188 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:43Z\\\",\\\"message\\\":\\\"2025-12-02T13:42:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c\\\\n2025-12-02T13:42:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c to /host/opt/cni/bin/\\\\n2025-12-02T13:42:58Z [verbose] multus-daemon started\\\\n2025-12-02T13:42:58Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:43:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.780699 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.798326 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.815320 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.829752 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.844064 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.844134 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.844153 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.844181 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.844204 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:54Z","lastTransitionTime":"2025-12-02T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.847804 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.909106 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.909174 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.909113 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:54 crc kubenswrapper[4900]: E1202 13:43:54.909349 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:54 crc kubenswrapper[4900]: E1202 13:43:54.909789 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:54 crc kubenswrapper[4900]: E1202 13:43:54.909899 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.933322 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.947454 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.947512 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.947535 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.947564 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.947586 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:54Z","lastTransitionTime":"2025-12-02T13:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.952406 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.973571 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:54 crc kubenswrapper[4900]: I1202 13:43:54.988553 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2da7413-f8f7-4a85-8c37-6ebf91a75b0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b6c26f99ca34eb2a84a471e4a5ba769d4c89f6d7f4656d50865c4893de6d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.010861 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.027067 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.035839 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.035884 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.035898 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.035918 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.035932 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.043864 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: E1202 13:43:55.051613 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.056583 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.056633 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.056661 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.056686 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.056701 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.064236 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: E1202 13:43:55.075927 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.083417 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.083547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.083572 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.083606 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.083669 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.088565 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db77
08c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\"
:\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: E1202 13:43:55.101746 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.106755 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.106804 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.106824 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.106851 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.106872 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.112927 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-c
ontroller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: E1202 13:43:55.127284 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet 
has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406ee
c4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\
\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.133083 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.133144 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.133159 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.133182 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.133208 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.139068 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: E1202 13:43:55.157436 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"6
7abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: E1202 13:43:55.157724 4900 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.160462 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.160549 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.160574 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.160609 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.160632 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.167704 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:26.035350 6581 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035408 6581 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035499 6581 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:26.035801 6581 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036244 6581 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036679 6581 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:26.036713 6581 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:26.036750 6581 factory.go:656] Stopping watch factory\\\\nI1202 13:43:26.036767 6581 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:43:26.036795 6581 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.184575 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.207380 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.229552 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.250276 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.264107 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.264186 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.264212 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.264249 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.264278 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.273250 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:43Z\\\",\\\"message\\\":\\\"2025-12-02T13:42:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c\\\\n2025-12-02T13:42:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c to /host/opt/cni/bin/\\\\n2025-12-02T13:42:58Z [verbose] multus-daemon started\\\\n2025-12-02T13:42:58Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:43:43Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.289491 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.368613 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.368713 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.368733 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.368762 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.368782 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.472138 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.472189 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.472202 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.472223 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.472239 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.525820 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/3.log" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.526905 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/2.log" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.532361 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" exitCode=1 Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.532414 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.532486 4900 scope.go:117] "RemoveContainer" containerID="4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.533831 4900 scope.go:117] "RemoveContainer" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" Dec 02 13:43:55 crc kubenswrapper[4900]: E1202 13:43:55.534149 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\"" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.557655 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.575137 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.575204 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.575226 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.575255 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.575275 4900 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.579584 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.600443 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.623849 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:43Z\\\",\\\"message\\\":\\\"2025-12-02T13:42:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c\\\\n2025-12-02T13:42:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c to /host/opt/cni/bin/\\\\n2025-12-02T13:42:58Z [verbose] multus-daemon started\\\\n2025-12-02T13:42:58Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:43:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.642398 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 
13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.664235 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.679100 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.679189 4900 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.679209 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.679242 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.679265 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.686318 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.705370 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.731197 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.749882 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2da7413-f8f7-4a85-8c37-6ebf91a75b0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b6c26f99ca34eb2a84a471e4a5ba769d4c89f6d7f4656d50865c4893de6d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.770561 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.783266 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.783348 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.783366 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.783396 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.783417 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.790849 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.808778 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.828054 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.852319 4900 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b223
38d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.875040 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.887490 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.887547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.887561 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.887607 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.887625 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.909315 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:55 crc kubenswrapper[4900]: E1202 13:43:55.909546 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.909467 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bee9ca27e60026813095a3f398df7efe5c6f09ff8fd4967ea0457c948830ac0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:26Z\\\",\\\"message\\\":\\\"tes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:26.035350 6581 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035408 6581 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:26.035499 6581 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:43:26.035801 6581 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036244 6581 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:26.036679 6581 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:43:26.036713 6581 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:43:26.036750 6581 factory.go:656] Stopping watch factory\\\\nI1202 13:43:26.036767 6581 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:43:26.036795 6581 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.984561 6943 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:43:54.984598 6943 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:54.984299 6943 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.984786 6943 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.986151 6943 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:54.987352 6943 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:54.987501 6943 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.987592 6943 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.929036 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.991459 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.991540 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.991561 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.991587 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:55 crc kubenswrapper[4900]: I1202 13:43:55.991608 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:55Z","lastTransitionTime":"2025-12-02T13:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.094708 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.094780 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.094800 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.094832 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.094851 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:56Z","lastTransitionTime":"2025-12-02T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.197543 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.197637 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.197686 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.197718 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.197741 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:56Z","lastTransitionTime":"2025-12-02T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.301927 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.301995 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.302015 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.302046 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.302067 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:56Z","lastTransitionTime":"2025-12-02T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.406501 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.407008 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.407028 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.407060 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.407078 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:56Z","lastTransitionTime":"2025-12-02T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.510812 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.510901 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.510923 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.510951 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.510975 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:56Z","lastTransitionTime":"2025-12-02T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.540876 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/3.log"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.546904 4900 scope.go:117] "RemoveContainer" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c"
Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.547183 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\"" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.566493 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.590201 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.613381 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.615757 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.615826 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.615845 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.615874 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.615896 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:56Z","lastTransitionTime":"2025-12-02T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.649530 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.984561 6943 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:43:54.984598 6943 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:54.984299 6943 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.984786 6943 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.986151 6943 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:54.987352 6943 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:54.987501 6943 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.987592 6943 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.674210 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:43Z\\\",\\\"message\\\":\\\"2025-12-02T13:42:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c\\\\n2025-12-02T13:42:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c to /host/opt/cni/bin/\\\\n2025-12-02T13:42:58Z [verbose] multus-daemon started\\\\n2025-12-02T13:42:58Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:43:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.695180 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.720197 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.720264 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.720282 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.720313 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.720335 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:56Z","lastTransitionTime":"2025-12-02T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.724586 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.747012 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.770033 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.788511 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.808603 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.824011 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.824070 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.824089 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.824115 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.824135 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:56Z","lastTransitionTime":"2025-12-02T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.828781 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z"
Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.838082 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.838297 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.838258922 +0000 UTC m=+146.254072803 (durationBeforeRetry 1m4s).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.838405 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.838453 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.838499 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.838601 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.838748 4900 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.838789 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.838837 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.838851 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.838868 4900 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.838880 4900 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.838892 4900 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.838801 4900 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.838858 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.838828028 +0000 UTC m=+146.254641919 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.839172 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.839125377 +0000 UTC m=+146.254939268 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.839220 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.839203079 +0000 UTC m=+146.255016960 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.839253 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.83924011 +0000 UTC m=+146.255053991 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.848624 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.869533 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.894960 4900 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.909424 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.909692 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.910262 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.910442 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.910502 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:43:56 crc kubenswrapper[4900]: E1202 13:43:56.910705 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.915874 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2da7413-f8f7-4a85-8c37-6ebf91a75b0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b6c26f99ca34eb2a84a471e4a5ba769d4c89f6d7f4656d50865c4893de6d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 
2025-08-24T17:21:41Z" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.927533 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.927588 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.927607 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.927635 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.927681 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:56Z","lastTransitionTime":"2025-12-02T13:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.936280 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:56 crc kubenswrapper[4900]: I1202 13:43:56.961188 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:43:56Z is after 2025-08-24T17:21:41Z" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.031863 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.031929 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.031950 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.031975 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.031994 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:57Z","lastTransitionTime":"2025-12-02T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.135098 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.135163 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.135182 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.135263 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.135297 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:57Z","lastTransitionTime":"2025-12-02T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.238635 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.238748 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.238812 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.238837 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.238890 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:57Z","lastTransitionTime":"2025-12-02T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.342187 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.342286 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.342307 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.342349 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.342371 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:57Z","lastTransitionTime":"2025-12-02T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.445688 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.445748 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.445766 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.445790 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.445809 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:57Z","lastTransitionTime":"2025-12-02T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.549935 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.550083 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.550170 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.550259 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.550343 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:57Z","lastTransitionTime":"2025-12-02T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.653713 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.653785 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.653811 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.653844 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.653867 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:57Z","lastTransitionTime":"2025-12-02T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.757133 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.757189 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.757203 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.757229 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.757246 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:57Z","lastTransitionTime":"2025-12-02T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.860724 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.860785 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.860806 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.860874 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.860895 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:57Z","lastTransitionTime":"2025-12-02T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.909331 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:43:57 crc kubenswrapper[4900]: E1202 13:43:57.909786 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.963732 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.963797 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.963817 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.963843 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:43:57 crc kubenswrapper[4900]: I1202 13:43:57.963868 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:57Z","lastTransitionTime":"2025-12-02T13:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.067504 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.067578 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.067596 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.067622 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.067640 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:58Z","lastTransitionTime":"2025-12-02T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.170796 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.170874 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.170892 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.170923 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.170941 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:58Z","lastTransitionTime":"2025-12-02T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.274547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.274620 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.274681 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.274719 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.274743 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:58Z","lastTransitionTime":"2025-12-02T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.379021 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.379116 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.379138 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.379173 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.379197 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:58Z","lastTransitionTime":"2025-12-02T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.483047 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.483098 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.483114 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.483137 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.483155 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:58Z","lastTransitionTime":"2025-12-02T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.585961 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.586022 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.586040 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.586067 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.586088 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:58Z","lastTransitionTime":"2025-12-02T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.689778 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.689857 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.689878 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.689903 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.689922 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:58Z","lastTransitionTime":"2025-12-02T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.794256 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.794327 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.794360 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.794396 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.794414 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:58Z","lastTransitionTime":"2025-12-02T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.897821 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.897886 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.897904 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.897929 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.897948 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:58Z","lastTransitionTime":"2025-12-02T13:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.909615 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:43:58 crc kubenswrapper[4900]: E1202 13:43:58.910077 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.910192 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.910212 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:43:58 crc kubenswrapper[4900]: E1202 13:43:58.910610 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:43:58 crc kubenswrapper[4900]: E1202 13:43:58.910840 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:43:58 crc kubenswrapper[4900]: I1202 13:43:58.950039 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.000953 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.001003 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.001018 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.001040 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.001058 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:59Z","lastTransitionTime":"2025-12-02T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.104679 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.104750 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.104774 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.104809 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.104831 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:59Z","lastTransitionTime":"2025-12-02T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.207899 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.207963 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.207980 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.208003 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.208022 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:59Z","lastTransitionTime":"2025-12-02T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.312452 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.312524 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.312547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.312576 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.312597 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:59Z","lastTransitionTime":"2025-12-02T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.415793 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.416211 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.416370 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.416512 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.416633 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:59Z","lastTransitionTime":"2025-12-02T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.520047 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.520139 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.520158 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.520194 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.520213 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:59Z","lastTransitionTime":"2025-12-02T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.624408 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.624489 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.624514 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.624547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.624570 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:59Z","lastTransitionTime":"2025-12-02T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.728791 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.728865 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.728886 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.728917 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.728937 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:59Z","lastTransitionTime":"2025-12-02T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.833114 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.833169 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.833189 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.833214 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.833234 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:59Z","lastTransitionTime":"2025-12-02T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.909522 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:43:59 crc kubenswrapper[4900]: E1202 13:43:59.910108 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.936567 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.936687 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.936708 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.936739 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:43:59 crc kubenswrapper[4900]: I1202 13:43:59.936758 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:43:59Z","lastTransitionTime":"2025-12-02T13:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.040086 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.040864 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.041008 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.041141 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.041271 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:00Z","lastTransitionTime":"2025-12-02T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.145260 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.145588 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.145609 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.145674 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.145694 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:00Z","lastTransitionTime":"2025-12-02T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.249055 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.249389 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.249535 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.249724 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.249888 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:00Z","lastTransitionTime":"2025-12-02T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.353624 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.354142 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.354297 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.354484 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.354633 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:00Z","lastTransitionTime":"2025-12-02T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.457372 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.457452 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.457472 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.457499 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.457520 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:00Z","lastTransitionTime":"2025-12-02T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.561466 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.561533 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.561550 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.561574 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.561596 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:00Z","lastTransitionTime":"2025-12-02T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.665105 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.665185 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.665204 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.665233 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.665253 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:00Z","lastTransitionTime":"2025-12-02T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.769010 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.769095 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.769123 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.769156 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.769183 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:00Z","lastTransitionTime":"2025-12-02T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.872719 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.872777 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.872800 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.872824 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.872841 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:00Z","lastTransitionTime":"2025-12-02T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.909734 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.909849 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:44:00 crc kubenswrapper[4900]: E1202 13:44:00.909975 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.910057 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:44:00 crc kubenswrapper[4900]: E1202 13:44:00.910219 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:44:00 crc kubenswrapper[4900]: E1202 13:44:00.910391 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.976982 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.977040 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.977057 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.977081 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:00 crc kubenswrapper[4900]: I1202 13:44:00.977100 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:00Z","lastTransitionTime":"2025-12-02T13:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.080828 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.080899 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.080917 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.080946 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.080969 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:01Z","lastTransitionTime":"2025-12-02T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.184418 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.184491 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.184511 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.184539 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.184562 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:01Z","lastTransitionTime":"2025-12-02T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.288084 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.288164 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.288188 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.288217 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.288235 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:01Z","lastTransitionTime":"2025-12-02T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.391984 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.392044 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.392062 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.392088 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.392108 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:01Z","lastTransitionTime":"2025-12-02T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.495177 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.495228 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.495245 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.495268 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.495288 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:01Z","lastTransitionTime":"2025-12-02T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.605930 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.606022 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.606042 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.606075 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.606098 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:01Z","lastTransitionTime":"2025-12-02T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.709366 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.709444 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.709464 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.709492 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.709514 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:01Z","lastTransitionTime":"2025-12-02T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.813279 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.813351 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.813366 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.813391 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.813408 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:01Z","lastTransitionTime":"2025-12-02T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.909747 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:44:01 crc kubenswrapper[4900]: E1202 13:44:01.910058 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.917447 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.917517 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.917540 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.917571 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:01 crc kubenswrapper[4900]: I1202 13:44:01.917592 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:01Z","lastTransitionTime":"2025-12-02T13:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.021093 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.021148 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.021161 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.021183 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.021200 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:02Z","lastTransitionTime":"2025-12-02T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.125103 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.125180 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.125206 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.125236 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.125256 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:02Z","lastTransitionTime":"2025-12-02T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.228348 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.228399 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.228410 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.228427 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.228439 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:02Z","lastTransitionTime":"2025-12-02T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.332378 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.332451 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.332469 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.332498 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.332520 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:02Z","lastTransitionTime":"2025-12-02T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.436363 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.436423 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.436444 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.436467 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.436482 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:02Z","lastTransitionTime":"2025-12-02T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.540037 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.540106 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.540122 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.540144 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.540156 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:02Z","lastTransitionTime":"2025-12-02T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.643118 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.643184 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.643197 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.643220 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.643235 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:02Z","lastTransitionTime":"2025-12-02T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.747266 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.747333 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.747354 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.747379 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.747400 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:02Z","lastTransitionTime":"2025-12-02T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.850290 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.850403 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.850422 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.850450 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.850469 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:02Z","lastTransitionTime":"2025-12-02T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.909480 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.909615 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:44:02 crc kubenswrapper[4900]: E1202 13:44:02.909783 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.909885 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:44:02 crc kubenswrapper[4900]: E1202 13:44:02.910072 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:44:02 crc kubenswrapper[4900]: E1202 13:44:02.910294 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.954179 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.954233 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.954251 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.954273 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:02 crc kubenswrapper[4900]: I1202 13:44:02.954292 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:02Z","lastTransitionTime":"2025-12-02T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.058317 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.058413 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.058433 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.058462 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.058485 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:03Z","lastTransitionTime":"2025-12-02T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.161971 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.162037 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.162057 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.162082 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.162102 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:03Z","lastTransitionTime":"2025-12-02T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.265401 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.265483 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.265506 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.265537 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.265556 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:03Z","lastTransitionTime":"2025-12-02T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.368751 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.368826 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.368846 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.368879 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.368902 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:03Z","lastTransitionTime":"2025-12-02T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.471969 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.472051 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.472069 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.472099 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.472118 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:03Z","lastTransitionTime":"2025-12-02T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.575534 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.575639 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.575706 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.575744 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.575773 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:03Z","lastTransitionTime":"2025-12-02T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.679405 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.679477 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.679494 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.679518 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.679538 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:03Z","lastTransitionTime":"2025-12-02T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.783872 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.783965 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.783998 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.784041 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.784117 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:03Z","lastTransitionTime":"2025-12-02T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.887078 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.887146 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.887163 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.887190 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.887209 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:03Z","lastTransitionTime":"2025-12-02T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.909717 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:03 crc kubenswrapper[4900]: E1202 13:44:03.909925 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.991303 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.991363 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.991387 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.991415 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:03 crc kubenswrapper[4900]: I1202 13:44:03.991436 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:03Z","lastTransitionTime":"2025-12-02T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.094302 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.094398 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.094421 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.094450 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.094472 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:04Z","lastTransitionTime":"2025-12-02T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.198099 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.198163 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.198186 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.198217 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.198239 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:04Z","lastTransitionTime":"2025-12-02T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.301611 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.301739 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.301763 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.301791 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.301808 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:04Z","lastTransitionTime":"2025-12-02T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.412782 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.412856 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.412876 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.412904 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.412925 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:04Z","lastTransitionTime":"2025-12-02T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.516591 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.516697 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.516716 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.516743 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.516761 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:04Z","lastTransitionTime":"2025-12-02T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.620392 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.620480 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.620507 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.620547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.620575 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:04Z","lastTransitionTime":"2025-12-02T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.723822 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.723872 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.723889 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.723912 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.723931 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:04Z","lastTransitionTime":"2025-12-02T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.827678 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.827775 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.827797 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.827828 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.827847 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:04Z","lastTransitionTime":"2025-12-02T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.909995 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.910033 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:04 crc kubenswrapper[4900]: E1202 13:44:04.910310 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.910513 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:04 crc kubenswrapper[4900]: E1202 13:44:04.910733 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:04 crc kubenswrapper[4900]: E1202 13:44:04.910929 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.928445 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff678ac7-9ffd-4ca7-a1c4-e740d021feaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88816eecbdfbb9f03cf6add01c34295fe4fbdc12833a76ef2461c0a904955e79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5be192af8047e33db93ba7c2604c3277bab8667b8ace6cd4fa94983bebb83cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4453c9c907a2fe02d07e3b22338d63f7c42e12543c7fc6570aec701910807f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.930907 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.930944 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.930953 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.930970 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.930983 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:04Z","lastTransitionTime":"2025-12-02T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.945790 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3021e815e20c6eb684627c66ce3ba928548dd19badc2fb4671f9bf5e42f76607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.975783 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338f7f04-2450-4efb-a2e7-3c0e13eb8998\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:55Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.984561 6943 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:43:54.984598 6943 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:54.984299 6943 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.984786 6943 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.986151 6943 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:43:54.987352 6943 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:43:54.987501 6943 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:43:54.987592 6943 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z4d72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-88rnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:04 crc kubenswrapper[4900]: I1202 13:44:04.991962 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p8tll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e94f1e8-0edf-4550-bf19-da9690ade27d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a686ecfedea330c68efd30bd7615faa8d349b50018ae8ce647cc38c94af4386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khptv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p8tll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.010468 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08de31-accc-4b2b-aac7-20e947009eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b241b40c90c525e0a3534f6205bb9d75b22a085447778aca425e949760c3e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778c6f5948b41a51d7f1fbcd873da88d6e4f575b3e8bb9084e4f1ca3ac6eb3d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dxxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsnck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.032484 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.032511 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.032521 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.032537 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.032548 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.035283 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49f275c1-19ff-4729-9cb5-736ec1525302\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 13:42:47.357495 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:42:47.359553 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1724295296/tls.crt::/tmp/serving-cert-1724295296/tls.key\\\\\\\"\\\\nI1202 13:42:52.821073 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 13:42:52.829387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 13:42:52.829431 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 13:42:52.829491 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 13:42:52.829503 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 13:42:52.839034 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 13:42:52.839078 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 13:42:52.839098 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 13:42:52.839105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 13:42:52.839111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 13:42:52.839116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1202 13:42:52.839228 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1202 13:42:52.842554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.054973 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d8c753-0fc1-4463-acba-77d2c9cc1323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcd7545f62f8184d53bedb892316bfd7e42d6d604dd9d04772b321548fb4821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e09c078a47a6f704d19499d0e9feb8d866153e8869a8934b5d54a7a74cea85f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f8bfc8600e22aeedb12cb63883109bf680a9e91ca634080bac59848b85ee47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12469f19867b2d79370985eac510499c8456d5bff46cb96b06f0b30c04765396\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.072982 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad3bf889b8f134ced578db560497298135be4e17edd15f944c5a915f576bf18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 
13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.094698 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r8pv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cacd7d0-a1a1-4ea0-b918-a73c8220e500\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:43:43Z\\\",\\\"message\\\":\\\"2025-12-02T13:42:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c\\\\n2025-12-02T13:42:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ebdda2d1-be21-4058-b958-21c35527695c to /host/opt/cni/bin/\\\\n2025-12-02T13:42:58Z [verbose] multus-daemon started\\\\n2025-12-02T13:42:58Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:43:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9wvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r8pv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.129793 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e77695-8d92-4d34-a405-a15e5549124b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e242ab9d47259a5fc883ba2497247fcc0a3287743024fa85ecc7ca85e79ff7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://418688a018c3914b70b23c4d970cb615fa324a9c96f315bc16b745933c319fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a972d1403b1e986ffe153a1c759d2f43664b03431d927c18c98ad2b06389cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d4ef51b446368161d9b6ddb8f7c6ba4c61a61
27bad7aabf9f5605cc004bfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e4940b5eda6fcf6f9ac59ba70912ea575959193355f9890d3cccfc40764b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21ee6566e0b5583b6716e1a6bbea2a90a1e8d180976f95562c6faa6adfde218b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21ee6566e0b5583b6716e1a6bbea2a90a1e8d180976f95562c6faa6adfde218b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ad120ec36b820587011bac6b3285fb7a917dc2d375dda0131a4b3a2f0e5d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01ad120ec36b820587011bac6b3285fb7a917dc2d375dda0131a4b3a2f0e5d0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a17a0448f21ab5931068a468d29fa98efaa96991dca979de162ca47c0511a608\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a17a0448f21ab5931068a468d29fa98efaa96991dca979de162ca47c0511a608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.135968 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.136036 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.136058 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.136084 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.136103 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.156701 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e6b3ac8079733773364c579a5c4a709603bac90b5a5a78b41fc889ed79402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bff0682b4d7d9887134d07d9afb770fdff0d8e4f04fd0dc804ddd0cab8c33f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.178709 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.200013 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c63b5f6-db87-48a2-b87e-5442db707843\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jm8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:43:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kzhwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.221045 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5699cd4c5c5dfa2bc44e6683cc62acc719dabc1f8b60a27167c1da7ba7dd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-487wr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ngwgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.239582 4900 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.239698 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.239747 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.239773 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.239790 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.244932 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57723040-ba7b-43ac-99c5-234dac2c90ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2d3b0a02acfd1015cbf3aa0c9cd911582776e6f304c932db21ca55450c79dd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:43:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0ad35a0fc10cd40a79f9dc3b1a6983c6ec7b34e3bdc832c56149567db8499c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787a079f0b1c54c5ae36ed6a43a78753aa53dd58818d4b7d95d5c2c0a188a592\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64247a5f3bf394474b843fca6d05cbcd0f3c8eee9b742cf65673ae74564650d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ed4afef75d04823081775de0a5fa94d4e47c885aec5dea529c11ce0160b3c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd35da96760f1b9343a65b65320b2004225896f73ed87e89112e030ce586cbbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25b2f528de694328ecdb107a207eacdc028f0334cd9876784aa470f6d2f9502a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:43:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:43:03Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqb94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckvw2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.262036 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2da7413-f8f7-4a85-8c37-6ebf91a75b0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78b6c26f99ca34eb2a84a471e4a5ba769d4c89f6d7f4656d50865c4893de6d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d344d6ca44d333fbf965dce3b5cdfbc0190dfcd99ce88569594690089fa15979\\\",\\\"ex
itCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:42:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:42:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.281829 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.301862 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.320330 4900 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5x7v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fc9e986-c2f6-4fac-b61c-de2ef11882c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b82bb6e22d49a4a2d0f93659ac5cc91e2e8fc77de57ffa7332a97948ea2823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72d7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:42:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5x7v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.342740 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.342897 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.342926 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.343070 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.343103 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.438857 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.438893 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.438902 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.438919 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.438930 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: E1202 13:44:05.459695 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.465279 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.465360 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.465380 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.465409 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.465431 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: E1202 13:44:05.484467 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.489308 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.489357 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.489378 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.489402 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.489420 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: E1202 13:44:05.516778 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.523256 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.523311 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.523331 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.523361 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.523380 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: E1202 13:44:05.540395 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.546842 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.546897 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.546917 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.546944 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.546960 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: E1202 13:44:05.568444 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0634cfab-4708-456e-8fb1-d034c189ea37\\\",\\\"systemUUID\\\":\\\"67abec4e-a00c-4d58-8a63-f5484bdca5e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:05 crc kubenswrapper[4900]: E1202 13:44:05.568702 4900 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.571442 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
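
The repeated patch failures above share one root cause: the kubelet cannot verify the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, whose validity ended 2025-08-24T17:21:41Z while the node clock reads 2025-12-02; once the retry budget is spent the kubelet gives up with "update node status exceeds retry count". The following is a minimal Python sketch, not part of the original log, that reproduces the same TLS verification failure from outside the kubelet, assuming the webhook is still listening on 127.0.0.1:9743:

    import socket
    import ssl

    # Attempt a verified TLS handshake against the webhook endpoint named in
    # the log. ssl.create_default_context() enforces certificate verification,
    # so an expired serving certificate aborts the handshake just as it does
    # for the kubelet's status patch.
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
                print("handshake OK, notAfter:", tls.getpeercert()["notAfter"])
    except ssl.SSLCertVerificationError as err:
        # The exact verify_message depends on which check fails first (trust
        # chain vs. validity window); with the expired certificate shown in
        # the log, "certificate has expired" is the expected outcome.
        print("verification failed:", err.verify_message)

Either way, the check isolates the failure to the webhook's certificate rather than to anything in the patch payload itself.
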
event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.571518 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.571539 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.571568 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.571589 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.675454 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.675518 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.675535 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.675564 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.675586 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.778822 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.778908 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.778930 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.778961 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.778981 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.881837 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.881907 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.881932 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.882036 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.882123 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.909115 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:05 crc kubenswrapper[4900]: E1202 13:44:05.909388 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.986926 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.987039 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.987098 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.987126 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:05 crc kubenswrapper[4900]: I1202 13:44:05.987183 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:05Z","lastTransitionTime":"2025-12-02T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.090536 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.090637 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.090697 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.090725 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.090744 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:06Z","lastTransitionTime":"2025-12-02T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.193360 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.193462 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.193490 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.193526 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.193552 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:06Z","lastTransitionTime":"2025-12-02T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.297240 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.297325 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.297343 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.297377 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.297405 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:06Z","lastTransitionTime":"2025-12-02T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.400937 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.400978 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.400988 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.401005 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.401018 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:06Z","lastTransitionTime":"2025-12-02T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.504344 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.504417 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.504441 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.504472 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.504497 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:06Z","lastTransitionTime":"2025-12-02T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.608508 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.608599 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.608623 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.608684 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.608710 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:06Z","lastTransitionTime":"2025-12-02T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.712732 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.712816 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.712833 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.712861 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.712880 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:06Z","lastTransitionTime":"2025-12-02T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.816717 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.816791 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.816810 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.816839 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.816861 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:06Z","lastTransitionTime":"2025-12-02T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.909968 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.910054 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.909997 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:06 crc kubenswrapper[4900]: E1202 13:44:06.910234 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
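
Every NodeNotReady heartbeat in this stretch carries the same message: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, so pods that need a pod-network sandbox (network-metrics-daemon-kzhwn, network-check-source, network-check-target, networking-console-plugin) are skipped with "Error syncing pod". A small sketch, run on the node itself with the directory path taken from the log, shows what the runtime is (not) finding:

    import glob
    import os

    # List CNI configuration files in the directory the kubelet complains
    # about; on a healthy node the network provider writes a *.conf or
    # *.conflist file here once it has started.
    cni_dir = "/etc/kubernetes/cni/net.d"
    if not os.path.isdir(cni_dir):
        print(f"{cni_dir} does not exist")
    else:
        confs = sorted(glob.glob(os.path.join(cni_dir, "*.conf*")))
        print(confs if confs else f"no CNI configuration files in {cni_dir}")

An empty result matches the NetworkPluginNotReady condition above; the pods blocked on it keep being re-queued until the network provider populates the directory.
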
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:06 crc kubenswrapper[4900]: E1202 13:44:06.910479 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:06 crc kubenswrapper[4900]: E1202 13:44:06.910582 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.919865 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.919923 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.919941 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.919971 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:06 crc kubenswrapper[4900]: I1202 13:44:06.919991 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:06Z","lastTransitionTime":"2025-12-02T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.023682 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.023749 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.023765 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.023801 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.023822 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:07Z","lastTransitionTime":"2025-12-02T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.127432 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.127526 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.127549 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.127580 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.127599 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:07Z","lastTransitionTime":"2025-12-02T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.231123 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.231186 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.231203 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.231231 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.231249 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:07Z","lastTransitionTime":"2025-12-02T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.335617 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.335797 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.335825 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.335856 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.335893 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:07Z","lastTransitionTime":"2025-12-02T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.440090 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.440155 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.440173 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.440197 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.440217 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:07Z","lastTransitionTime":"2025-12-02T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.544234 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.544299 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.544315 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.544341 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.544371 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:07Z","lastTransitionTime":"2025-12-02T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.647407 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.647474 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.647491 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.647516 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.647535 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:07Z","lastTransitionTime":"2025-12-02T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.750916 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.750992 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.751017 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.751052 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.751076 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:07Z","lastTransitionTime":"2025-12-02T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.855221 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.855286 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.855303 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.855330 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.855354 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:07Z","lastTransitionTime":"2025-12-02T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.909096 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:07 crc kubenswrapper[4900]: E1202 13:44:07.909694 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.958502 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.958568 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.958583 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.958611 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:07 crc kubenswrapper[4900]: I1202 13:44:07.958632 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:07Z","lastTransitionTime":"2025-12-02T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.062487 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.062567 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.062584 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.062617 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.062636 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:08Z","lastTransitionTime":"2025-12-02T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.165810 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.165893 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.165904 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.165928 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.165944 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:08Z","lastTransitionTime":"2025-12-02T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.269591 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.269713 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.269736 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.269767 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.269790 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:08Z","lastTransitionTime":"2025-12-02T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.373344 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.373453 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.373499 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.373539 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.373565 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:08Z","lastTransitionTime":"2025-12-02T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.476444 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.476561 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.476585 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.476613 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.476636 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:08Z","lastTransitionTime":"2025-12-02T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.580122 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.580197 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.580222 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.580251 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.580274 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:08Z","lastTransitionTime":"2025-12-02T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.683227 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.683326 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.683385 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.683443 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.683462 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:08Z","lastTransitionTime":"2025-12-02T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.786840 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.786914 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.786942 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.786972 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.786995 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:08Z","lastTransitionTime":"2025-12-02T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.890068 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.890155 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.890179 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.890210 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.890238 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:08Z","lastTransitionTime":"2025-12-02T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.909456 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.909521 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.909456 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:08 crc kubenswrapper[4900]: E1202 13:44:08.909604 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
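
The status loop is dense: the same five-entry sequence (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, then the Ready=False condition from setters.go) recurs roughly every 100 ms. A short filter over a saved copy of this journal, sketched below with "kubelet.log" as an assumed file name, makes that cadence countable:

    import re
    from collections import Counter

    # Tally "Node became not ready" records per wall-clock second; the klog
    # header layout (I1202 HH:MM:SS.micros PID file:line]) is taken from the
    # entries above.
    pattern = re.compile(
        r'I1202 (\d{2}:\d{2}:\d{2})\.\d+ \d+ setters\.go:\d+\] '
        r'"Node became not ready"'
    )
    counts = Counter()
    with open("kubelet.log", encoding="utf-8") as fh:
        for line in fh:
            counts.update(pattern.findall(line))

    for second, hits in sorted(counts.items()):
        print(f"{second}: {hits} not-ready transitions logged")

For the seconds covered here the tally comes to roughly ten per second, consistent with the kubelet re-evaluating node conditions on each sync while the CNI configuration is still missing.
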
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:08 crc kubenswrapper[4900]: E1202 13:44:08.909891 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:08 crc kubenswrapper[4900]: E1202 13:44:08.909960 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.993955 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.994015 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.994033 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.994057 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:08 crc kubenswrapper[4900]: I1202 13:44:08.994076 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:08Z","lastTransitionTime":"2025-12-02T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.097351 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.097429 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.097452 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.097481 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.097503 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:09Z","lastTransitionTime":"2025-12-02T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.201989 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.202058 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.202077 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.202106 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.202127 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:09Z","lastTransitionTime":"2025-12-02T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.305129 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.305201 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.305222 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.305256 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.305281 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:09Z","lastTransitionTime":"2025-12-02T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.408863 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.408942 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.408961 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.408991 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.409012 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:09Z","lastTransitionTime":"2025-12-02T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.512245 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.512311 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.512324 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.512349 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.512362 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:09Z","lastTransitionTime":"2025-12-02T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.615582 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.615670 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.615689 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.615713 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.615736 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:09Z","lastTransitionTime":"2025-12-02T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.718998 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.719069 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.719091 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.719123 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.719147 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:09Z","lastTransitionTime":"2025-12-02T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.822725 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.823100 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.823334 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.827528 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.827738 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:09Z","lastTransitionTime":"2025-12-02T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.909029 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:09 crc kubenswrapper[4900]: E1202 13:44:09.909522 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.931351 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.931419 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.931438 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.931462 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:09 crc kubenswrapper[4900]: I1202 13:44:09.931485 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:09Z","lastTransitionTime":"2025-12-02T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.035987 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.036532 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.036743 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.036920 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.037099 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:10Z","lastTransitionTime":"2025-12-02T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.141170 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.141260 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.141279 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.141306 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.141325 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:10Z","lastTransitionTime":"2025-12-02T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.245229 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.245295 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.245314 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.245341 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.245362 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:10Z","lastTransitionTime":"2025-12-02T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.348561 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.348628 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.348677 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.348709 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.348732 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:10Z","lastTransitionTime":"2025-12-02T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.451973 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.452068 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.452095 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.452130 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.452152 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:10Z","lastTransitionTime":"2025-12-02T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.555947 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.556005 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.556023 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.556051 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.556069 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:10Z","lastTransitionTime":"2025-12-02T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.659286 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.659349 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.659367 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.659392 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.659411 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:10Z","lastTransitionTime":"2025-12-02T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.763675 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.763732 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.763750 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.763776 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.763796 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:10Z","lastTransitionTime":"2025-12-02T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.867069 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.867123 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.867135 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.867154 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.867167 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:10Z","lastTransitionTime":"2025-12-02T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.909455 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.909506 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:10 crc kubenswrapper[4900]: E1202 13:44:10.909849 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:10 crc kubenswrapper[4900]: E1202 13:44:10.909944 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.910250 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:10 crc kubenswrapper[4900]: E1202 13:44:10.910691 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.969711 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.969766 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.969783 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.969807 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:10 crc kubenswrapper[4900]: I1202 13:44:10.969829 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:10Z","lastTransitionTime":"2025-12-02T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.072821 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.072913 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.072934 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.072964 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.072989 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:11Z","lastTransitionTime":"2025-12-02T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.176104 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.177215 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.177379 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.177519 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.177732 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:11Z","lastTransitionTime":"2025-12-02T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.280664 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.280710 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.280722 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.280740 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.280753 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:11Z","lastTransitionTime":"2025-12-02T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.383680 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.384131 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.384264 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.384390 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.384532 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:11Z","lastTransitionTime":"2025-12-02T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.488165 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.488239 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.488261 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.488288 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.488307 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:11Z","lastTransitionTime":"2025-12-02T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.591300 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.591362 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.591377 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.591399 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.591413 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:11Z","lastTransitionTime":"2025-12-02T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.696242 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.696316 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.696341 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.696390 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.696419 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:11Z","lastTransitionTime":"2025-12-02T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.799686 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.800045 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.800213 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.800378 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.800522 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:11Z","lastTransitionTime":"2025-12-02T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.904542 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.904957 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.905114 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.905263 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.905421 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:11Z","lastTransitionTime":"2025-12-02T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.910183 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:11 crc kubenswrapper[4900]: E1202 13:44:11.911208 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:11 crc kubenswrapper[4900]: I1202 13:44:11.911540 4900 scope.go:117] "RemoveContainer" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" Dec 02 13:44:11 crc kubenswrapper[4900]: E1202 13:44:11.911813 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\"" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.008623 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.008703 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.008722 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.008746 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.008765 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:12Z","lastTransitionTime":"2025-12-02T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.112251 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.112317 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.112335 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.112359 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.112377 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:12Z","lastTransitionTime":"2025-12-02T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.215672 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.215758 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.215779 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.215818 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.215839 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:12Z","lastTransitionTime":"2025-12-02T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.319459 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.319566 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.319590 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.319622 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.319674 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:12Z","lastTransitionTime":"2025-12-02T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.423203 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.423273 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.423290 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.423316 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.423337 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:12Z","lastTransitionTime":"2025-12-02T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.527078 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.527167 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.527188 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.527216 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.527238 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:12Z","lastTransitionTime":"2025-12-02T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.630065 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.630112 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.630124 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.630142 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.630155 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:12Z","lastTransitionTime":"2025-12-02T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.733270 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.733328 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.733345 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.733366 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.733384 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:12Z","lastTransitionTime":"2025-12-02T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.836572 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.836629 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.836679 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.836705 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.836723 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:12Z","lastTransitionTime":"2025-12-02T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.909500 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.909552 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:12 crc kubenswrapper[4900]: E1202 13:44:12.909708 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.909500 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:44:12 crc kubenswrapper[4900]: E1202 13:44:12.909838 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:44:12 crc kubenswrapper[4900]: E1202 13:44:12.910181 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.939113 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.939168 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.939186 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.939208 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:12 crc kubenswrapper[4900]: I1202 13:44:12.939226 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:12Z","lastTransitionTime":"2025-12-02T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.043572 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.043629 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.043665 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.043688 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.043705 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:13Z","lastTransitionTime":"2025-12-02T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.149069 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.149123 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.149141 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.149163 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.149182 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:13Z","lastTransitionTime":"2025-12-02T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.251638 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.251720 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.251738 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.251762 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.251780 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:13Z","lastTransitionTime":"2025-12-02T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.354759 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.354893 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.354912 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.354938 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.354957 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:13Z","lastTransitionTime":"2025-12-02T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.461293 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.461405 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.461443 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.461485 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.461510 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:13Z","lastTransitionTime":"2025-12-02T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.564087 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.564138 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.564152 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.564171 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.564182 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:13Z","lastTransitionTime":"2025-12-02T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.668080 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.668150 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.668170 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.668197 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.668216 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:13Z","lastTransitionTime":"2025-12-02T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.771894 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.772271 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.772289 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.772317 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.772336 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:13Z","lastTransitionTime":"2025-12-02T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.876094 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.876204 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.876226 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.876264 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.876292 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:13Z","lastTransitionTime":"2025-12-02T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.909618 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:44:13 crc kubenswrapper[4900]: E1202 13:44:13.909865 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.979241 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.979308 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.979328 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.979355 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:13 crc kubenswrapper[4900]: I1202 13:44:13.979374 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:13Z","lastTransitionTime":"2025-12-02T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.083111 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.083213 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.083242 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.083286 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.083318 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:14Z","lastTransitionTime":"2025-12-02T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.186745 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.186837 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.186865 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.186902 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.186927 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:14Z","lastTransitionTime":"2025-12-02T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.290456 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.290528 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.290547 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.290578 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.290604 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:14Z","lastTransitionTime":"2025-12-02T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.393868 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.393915 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.393927 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.393944 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.393959 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:14Z","lastTransitionTime":"2025-12-02T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.496405 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.496447 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.496460 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.496476 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.496487 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:14Z","lastTransitionTime":"2025-12-02T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.600222 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.600299 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.600324 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.600358 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.600384 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:14Z","lastTransitionTime":"2025-12-02T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.703888 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.703956 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.703973 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.703998 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.704017 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:14Z","lastTransitionTime":"2025-12-02T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.807248 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.807315 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.807333 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.807360 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.807382 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:14Z","lastTransitionTime":"2025-12-02T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.909094 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.909115 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:44:14 crc kubenswrapper[4900]: E1202 13:44:14.909482 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.909595 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:44:14 crc kubenswrapper[4900]: E1202 13:44:14.910504 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:44:14 crc kubenswrapper[4900]: E1202 13:44:14.910555 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.917293 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.917362 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.917384 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.917441 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.917462 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:14Z","lastTransitionTime":"2025-12-02T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:14 crc kubenswrapper[4900]: I1202 13:44:14.996729 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.996698643 podStartE2EDuration="16.996698643s" podCreationTimestamp="2025-12-02 13:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:14.971422426 +0000 UTC m=+100.387236327" watchObservedRunningTime="2025-12-02 13:44:14.996698643 +0000 UTC m=+100.412512534"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.021623 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.021706 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.021725 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.021755 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.021776 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:15Z","lastTransitionTime":"2025-12-02T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.056948 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=24.056870923 podStartE2EDuration="24.056870923s" podCreationTimestamp="2025-12-02 13:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:15.055425282 +0000 UTC m=+100.471239173" watchObservedRunningTime="2025-12-02 13:44:15.056870923 +0000 UTC m=+100.472684804"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.117533 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5x7v9" podStartSLOduration=79.117496597 podStartE2EDuration="1m19.117496597s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:15.116869789 +0000 UTC m=+100.532683680" watchObservedRunningTime="2025-12-02 13:44:15.117496597 +0000 UTC m=+100.533310488"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.127620 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.127748 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.127774 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.127806 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.127830 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:15Z","lastTransitionTime":"2025-12-02T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.139379 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podStartSLOduration=79.139355045 podStartE2EDuration="1m19.139355045s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:15.139300624 +0000 UTC m=+100.555114495" watchObservedRunningTime="2025-12-02 13:44:15.139355045 +0000 UTC m=+100.555168936"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.169624 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ckvw2" podStartSLOduration=79.169588835 podStartE2EDuration="1m19.169588835s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:15.168768521 +0000 UTC m=+100.584582442" watchObservedRunningTime="2025-12-02 13:44:15.169588835 +0000 UTC m=+100.585402726"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.194896 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.194863482 podStartE2EDuration="1m20.194863482s" podCreationTimestamp="2025-12-02 13:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:15.1937494 +0000 UTC m=+100.609563281" watchObservedRunningTime="2025-12-02 13:44:15.194863482 +0000 UTC m=+100.610677383"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.230948 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.231049 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.231078 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.231110 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.231130 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:15Z","lastTransitionTime":"2025-12-02T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.286513 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-p8tll" podStartSLOduration=79.286476216 podStartE2EDuration="1m19.286476216s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:15.285819067 +0000 UTC m=+100.701632958" watchObservedRunningTime="2025-12-02 13:44:15.286476216 +0000 UTC m=+100.702290107"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.341971 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.342049 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.342069 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.342097 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.342118 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:15Z","lastTransitionTime":"2025-12-02T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.344549 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=50.344528515 podStartE2EDuration="50.344528515s" podCreationTimestamp="2025-12-02 13:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:15.344062082 +0000 UTC m=+100.759875973" watchObservedRunningTime="2025-12-02 13:44:15.344528515 +0000 UTC m=+100.760342396"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.344941 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.344927057 podStartE2EDuration="1m22.344927057s" podCreationTimestamp="2025-12-02 13:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:15.316805378 +0000 UTC m=+100.732619269" watchObservedRunningTime="2025-12-02 13:44:15.344927057 +0000 UTC m=+100.760740938"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.391238 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-r8pv9" podStartSLOduration=79.391203208 podStartE2EDuration="1m19.391203208s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:15.390408275 +0000 UTC m=+100.806222136" watchObservedRunningTime="2025-12-02 13:44:15.391203208 +0000 UTC m=+100.807017099"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.408640 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsnck" podStartSLOduration=78.408615189 podStartE2EDuration="1m18.408615189s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:15.408259638 +0000 UTC m=+100.824073539" watchObservedRunningTime="2025-12-02 13:44:15.408615189 +0000 UTC m=+100.824429050"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.445450 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.445511 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.445526 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.445548 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.445562 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:15Z","lastTransitionTime":"2025-12-02T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.549127 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.549199 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.549218 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.549246 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.549266 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:15Z","lastTransitionTime":"2025-12-02T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
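The pod_startup_latency_tracker entries above each report a podStartSLOduration. For these pods firstStartedPulling is the zero time (no image pull), and the logged value lines up exactly with watchObservedRunningTime minus podCreationTimestamp. The quick reconstruction below uses the etcd-crc numbers; the formula is inferred from how the logged values add up, not taken from kubelet source.

# Sanity-check of the etcd-crc entry: 13:44:14.996698643 - 13:43:58
# should give the logged podStartSLOduration of ~16.996698643s.
from datetime import datetime, timezone

created = datetime(2025, 12, 2, 13, 43, 58, tzinfo=timezone.utc)
observed = datetime(2025, 12, 2, 13, 44, 14, 996699, tzinfo=timezone.utc)  # ns truncated to us

slo = (observed - created).total_seconds()
print(f"podStartSLOduration={slo:.6f}s")  # ~16.996699s, matching the log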
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.570106 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:44:15 crc kubenswrapper[4900]: E1202 13:44:15.570392 4900 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 02 13:44:15 crc kubenswrapper[4900]: E1202 13:44:15.570518 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs podName:1c63b5f6-db87-48a2-b87e-5442db707843 nodeName:}" failed. No retries permitted until 2025-12-02 13:45:19.570483493 +0000 UTC m=+164.986297374 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs") pod "network-metrics-daemon-kzhwn" (UID: "1c63b5f6-db87-48a2-b87e-5442db707843") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.615366 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.615434 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.615452 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.615477 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.615495 4900 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:15Z","lastTransitionTime":"2025-12-02T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.688592 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"]
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.689242 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
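The nestedpendingoperations entry above pushes the next MountVolume attempt out by durationBeforeRetry 1m4s. That 64-second figure is consistent with an exponential backoff that doubles per failure; the 500 ms base, factor 2 and roughly two-minute cap below match the kubelet's volume-operation backoff as commonly described, but treat the exact constants as assumptions rather than confirmed source values.

# Hedged illustration of the retry pacing behind "durationBeforeRetry 1m4s".
BASE_SECONDS = 0.5   # assumed initial delay
FACTOR = 2.0         # assumed growth factor
CAP_SECONDS = 122.0  # assumed maximum delay

def duration_before_retry(failures: int) -> float:
    """Delay applied after the given number of consecutive failures."""
    return min(BASE_SECONDS * FACTOR ** (failures - 1), CAP_SECONDS)

if __name__ == "__main__":
    for n in range(1, 10):
        print(f"failure {n}: retry in {duration_before_retry(n):g}s")
    # failure 8 -> 64s, i.e. the 1m4s durationBeforeRetry seen in the log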
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.692265 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.693160 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.694560 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.694636 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.772808 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e557668-f981-4a7b-a478-5380342ae7d6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.772887 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1e557668-f981-4a7b-a478-5380342ae7d6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.772910 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e557668-f981-4a7b-a478-5380342ae7d6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.772952 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1e557668-f981-4a7b-a478-5380342ae7d6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.772975 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e557668-f981-4a7b-a478-5380342ae7d6-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.874951 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e557668-f981-4a7b-a478-5380342ae7d6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.875132 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1e557668-f981-4a7b-a478-5380342ae7d6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.875187 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e557668-f981-4a7b-a478-5380342ae7d6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.875287 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1e557668-f981-4a7b-a478-5380342ae7d6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.875251 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1e557668-f981-4a7b-a478-5380342ae7d6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.875340 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e557668-f981-4a7b-a478-5380342ae7d6-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.875429 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1e557668-f981-4a7b-a478-5380342ae7d6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.877299 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e557668-f981-4a7b-a478-5380342ae7d6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.884590 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e557668-f981-4a7b-a478-5380342ae7d6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.907266 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e557668-f981-4a7b-a478-5380342ae7d6-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rl4zm\" (UID: \"1e557668-f981-4a7b-a478-5380342ae7d6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:15 crc kubenswrapper[4900]: I1202 13:44:15.909556 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn"
Dec 02 13:44:15 crc kubenswrapper[4900]: E1202 13:44:15.909809 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843"
Dec 02 13:44:16 crc kubenswrapper[4900]: I1202 13:44:16.010480 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm"
Dec 02 13:44:16 crc kubenswrapper[4900]: I1202 13:44:16.628066 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm" event={"ID":"1e557668-f981-4a7b-a478-5380342ae7d6","Type":"ContainerStarted","Data":"ab5349b5baa8854e9f4928b1ba7620dcad92a94c81c6465fc843b6b18cb0e12d"}
Dec 02 13:44:16 crc kubenswrapper[4900]: I1202 13:44:16.628149 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm" event={"ID":"1e557668-f981-4a7b-a478-5380342ae7d6","Type":"ContainerStarted","Data":"208a9cf764286a237a52c355d6e4953c365be06927ea77b9a10f511e89b29fb2"}
Dec 02 13:44:16 crc kubenswrapper[4900]: I1202 13:44:16.651489 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rl4zm" podStartSLOduration=80.651458029 podStartE2EDuration="1m20.651458029s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:16.650075299 +0000 UTC m=+102.065889180" watchObservedRunningTime="2025-12-02 13:44:16.651458029 +0000 UTC m=+102.067271920"
Dec 02 13:44:16 crc kubenswrapper[4900]: I1202 13:44:16.909699 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:44:16 crc kubenswrapper[4900]: I1202 13:44:16.909853 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:44:16 crc kubenswrapper[4900]: I1202 13:44:16.909987 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:44:16 crc kubenswrapper[4900]: E1202 13:44:16.909879 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:16 crc kubenswrapper[4900]: E1202 13:44:16.910073 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:16 crc kubenswrapper[4900]: E1202 13:44:16.910216 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:17 crc kubenswrapper[4900]: I1202 13:44:17.909579 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:17 crc kubenswrapper[4900]: E1202 13:44:17.909853 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:18 crc kubenswrapper[4900]: I1202 13:44:18.909218 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:18 crc kubenswrapper[4900]: I1202 13:44:18.909235 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:18 crc kubenswrapper[4900]: I1202 13:44:18.909280 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:18 crc kubenswrapper[4900]: E1202 13:44:18.909373 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:18 crc kubenswrapper[4900]: E1202 13:44:18.909534 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:18 crc kubenswrapper[4900]: E1202 13:44:18.909824 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:19 crc kubenswrapper[4900]: I1202 13:44:19.909414 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:19 crc kubenswrapper[4900]: E1202 13:44:19.909983 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:20 crc kubenswrapper[4900]: I1202 13:44:20.909544 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:20 crc kubenswrapper[4900]: I1202 13:44:20.909616 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:20 crc kubenswrapper[4900]: I1202 13:44:20.909725 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:20 crc kubenswrapper[4900]: E1202 13:44:20.909805 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:20 crc kubenswrapper[4900]: E1202 13:44:20.909972 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:20 crc kubenswrapper[4900]: E1202 13:44:20.910180 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:21 crc kubenswrapper[4900]: I1202 13:44:21.909828 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:21 crc kubenswrapper[4900]: E1202 13:44:21.910075 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:22 crc kubenswrapper[4900]: I1202 13:44:22.909155 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:22 crc kubenswrapper[4900]: I1202 13:44:22.909278 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:22 crc kubenswrapper[4900]: I1202 13:44:22.909313 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:22 crc kubenswrapper[4900]: E1202 13:44:22.910097 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:22 crc kubenswrapper[4900]: E1202 13:44:22.910218 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:22 crc kubenswrapper[4900]: E1202 13:44:22.910038 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:23 crc kubenswrapper[4900]: I1202 13:44:23.909696 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:23 crc kubenswrapper[4900]: E1202 13:44:23.909863 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:23 crc kubenswrapper[4900]: I1202 13:44:23.911081 4900 scope.go:117] "RemoveContainer" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" Dec 02 13:44:23 crc kubenswrapper[4900]: E1202 13:44:23.911358 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\"" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" Dec 02 13:44:24 crc kubenswrapper[4900]: I1202 13:44:24.910020 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:24 crc kubenswrapper[4900]: I1202 13:44:24.910038 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:24 crc kubenswrapper[4900]: E1202 13:44:24.911932 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:24 crc kubenswrapper[4900]: I1202 13:44:24.912013 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:24 crc kubenswrapper[4900]: E1202 13:44:24.912139 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:24 crc kubenswrapper[4900]: E1202 13:44:24.912326 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:25 crc kubenswrapper[4900]: I1202 13:44:25.909887 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:25 crc kubenswrapper[4900]: E1202 13:44:25.910052 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:26 crc kubenswrapper[4900]: I1202 13:44:26.909095 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:26 crc kubenswrapper[4900]: I1202 13:44:26.909157 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:26 crc kubenswrapper[4900]: I1202 13:44:26.909253 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:26 crc kubenswrapper[4900]: E1202 13:44:26.909506 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:26 crc kubenswrapper[4900]: E1202 13:44:26.910176 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:26 crc kubenswrapper[4900]: E1202 13:44:26.910491 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:27 crc kubenswrapper[4900]: I1202 13:44:27.909442 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:27 crc kubenswrapper[4900]: E1202 13:44:27.910476 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:28 crc kubenswrapper[4900]: I1202 13:44:28.909885 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:28 crc kubenswrapper[4900]: I1202 13:44:28.909985 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:28 crc kubenswrapper[4900]: I1202 13:44:28.909995 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:28 crc kubenswrapper[4900]: E1202 13:44:28.910104 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:28 crc kubenswrapper[4900]: E1202 13:44:28.910283 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:28 crc kubenswrapper[4900]: E1202 13:44:28.910502 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:29 crc kubenswrapper[4900]: I1202 13:44:29.684426 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r8pv9_7cacd7d0-a1a1-4ea0-b918-a73c8220e500/kube-multus/1.log" Dec 02 13:44:29 crc kubenswrapper[4900]: I1202 13:44:29.685376 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r8pv9_7cacd7d0-a1a1-4ea0-b918-a73c8220e500/kube-multus/0.log" Dec 02 13:44:29 crc kubenswrapper[4900]: I1202 13:44:29.685471 4900 generic.go:334] "Generic (PLEG): container finished" podID="7cacd7d0-a1a1-4ea0-b918-a73c8220e500" containerID="ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8" exitCode=1 Dec 02 13:44:29 crc kubenswrapper[4900]: I1202 13:44:29.685536 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r8pv9" event={"ID":"7cacd7d0-a1a1-4ea0-b918-a73c8220e500","Type":"ContainerDied","Data":"ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8"} Dec 02 13:44:29 crc kubenswrapper[4900]: I1202 13:44:29.685618 4900 scope.go:117] "RemoveContainer" containerID="7694f3643934df8a7be385d184a51faf3199894d3322622f10fa292fab8f2e2a" Dec 02 13:44:29 crc kubenswrapper[4900]: I1202 13:44:29.686425 4900 scope.go:117] "RemoveContainer" containerID="ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8" Dec 02 13:44:29 crc kubenswrapper[4900]: E1202 13:44:29.688511 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-r8pv9_openshift-multus(7cacd7d0-a1a1-4ea0-b918-a73c8220e500)\"" pod="openshift-multus/multus-r8pv9" podUID="7cacd7d0-a1a1-4ea0-b918-a73c8220e500" Dec 02 13:44:29 crc kubenswrapper[4900]: I1202 13:44:29.909476 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:29 crc kubenswrapper[4900]: E1202 13:44:29.909710 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:30 crc kubenswrapper[4900]: I1202 13:44:30.692637 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r8pv9_7cacd7d0-a1a1-4ea0-b918-a73c8220e500/kube-multus/1.log" Dec 02 13:44:30 crc kubenswrapper[4900]: I1202 13:44:30.909130 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:30 crc kubenswrapper[4900]: I1202 13:44:30.909173 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:30 crc kubenswrapper[4900]: I1202 13:44:30.909337 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:30 crc kubenswrapper[4900]: E1202 13:44:30.909535 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:30 crc kubenswrapper[4900]: E1202 13:44:30.909723 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:30 crc kubenswrapper[4900]: E1202 13:44:30.910030 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:31 crc kubenswrapper[4900]: I1202 13:44:31.910079 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:31 crc kubenswrapper[4900]: E1202 13:44:31.910357 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:32 crc kubenswrapper[4900]: I1202 13:44:32.910009 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:32 crc kubenswrapper[4900]: I1202 13:44:32.910009 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:32 crc kubenswrapper[4900]: E1202 13:44:32.910242 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:32 crc kubenswrapper[4900]: I1202 13:44:32.910269 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:32 crc kubenswrapper[4900]: E1202 13:44:32.910379 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:32 crc kubenswrapper[4900]: E1202 13:44:32.910507 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:33 crc kubenswrapper[4900]: I1202 13:44:33.909008 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:33 crc kubenswrapper[4900]: E1202 13:44:33.909224 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:34 crc kubenswrapper[4900]: I1202 13:44:34.909217 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:34 crc kubenswrapper[4900]: I1202 13:44:34.909248 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:34 crc kubenswrapper[4900]: I1202 13:44:34.909315 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:34 crc kubenswrapper[4900]: E1202 13:44:34.911245 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:34 crc kubenswrapper[4900]: E1202 13:44:34.911581 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:34 crc kubenswrapper[4900]: E1202 13:44:34.912111 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:34 crc kubenswrapper[4900]: I1202 13:44:34.912503 4900 scope.go:117] "RemoveContainer" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" Dec 02 13:44:34 crc kubenswrapper[4900]: E1202 13:44:34.912883 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-88rnd_openshift-ovn-kubernetes(338f7f04-2450-4efb-a2e7-3c0e13eb8998)\"" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" Dec 02 13:44:34 crc kubenswrapper[4900]: E1202 13:44:34.955161 4900 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 02 13:44:35 crc kubenswrapper[4900]: E1202 13:44:35.037503 4900 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 13:44:35 crc kubenswrapper[4900]: I1202 13:44:35.909977 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:35 crc kubenswrapper[4900]: E1202 13:44:35.910173 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:36 crc kubenswrapper[4900]: I1202 13:44:36.909563 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:36 crc kubenswrapper[4900]: I1202 13:44:36.909686 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:36 crc kubenswrapper[4900]: I1202 13:44:36.909740 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:36 crc kubenswrapper[4900]: E1202 13:44:36.909877 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:36 crc kubenswrapper[4900]: E1202 13:44:36.910062 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:36 crc kubenswrapper[4900]: E1202 13:44:36.910285 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:37 crc kubenswrapper[4900]: I1202 13:44:37.909889 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:37 crc kubenswrapper[4900]: E1202 13:44:37.910132 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:38 crc kubenswrapper[4900]: I1202 13:44:38.909858 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:38 crc kubenswrapper[4900]: I1202 13:44:38.909994 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:38 crc kubenswrapper[4900]: E1202 13:44:38.910110 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:38 crc kubenswrapper[4900]: I1202 13:44:38.910152 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:38 crc kubenswrapper[4900]: E1202 13:44:38.910307 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:38 crc kubenswrapper[4900]: E1202 13:44:38.910510 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:39 crc kubenswrapper[4900]: I1202 13:44:39.909909 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:39 crc kubenswrapper[4900]: E1202 13:44:39.910477 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:40 crc kubenswrapper[4900]: E1202 13:44:40.039163 4900 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 13:44:40 crc kubenswrapper[4900]: I1202 13:44:40.909195 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:40 crc kubenswrapper[4900]: I1202 13:44:40.909297 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:40 crc kubenswrapper[4900]: I1202 13:44:40.909450 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:40 crc kubenswrapper[4900]: E1202 13:44:40.909459 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:40 crc kubenswrapper[4900]: E1202 13:44:40.909819 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:40 crc kubenswrapper[4900]: E1202 13:44:40.909938 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:40 crc kubenswrapper[4900]: I1202 13:44:40.910002 4900 scope.go:117] "RemoveContainer" containerID="ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8" Dec 02 13:44:41 crc kubenswrapper[4900]: I1202 13:44:41.740932 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r8pv9_7cacd7d0-a1a1-4ea0-b918-a73c8220e500/kube-multus/1.log" Dec 02 13:44:41 crc kubenswrapper[4900]: I1202 13:44:41.741030 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r8pv9" event={"ID":"7cacd7d0-a1a1-4ea0-b918-a73c8220e500","Type":"ContainerStarted","Data":"3f1d3316a23a35820d847ba051ae6244de8214e97b45c832a2f23ac699e8cf53"} Dec 02 13:44:41 crc kubenswrapper[4900]: I1202 13:44:41.909312 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:41 crc kubenswrapper[4900]: E1202 13:44:41.909517 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:42 crc kubenswrapper[4900]: I1202 13:44:42.909978 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:42 crc kubenswrapper[4900]: I1202 13:44:42.910035 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:42 crc kubenswrapper[4900]: I1202 13:44:42.910101 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:42 crc kubenswrapper[4900]: E1202 13:44:42.910223 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:42 crc kubenswrapper[4900]: E1202 13:44:42.910446 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:42 crc kubenswrapper[4900]: E1202 13:44:42.910711 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:43 crc kubenswrapper[4900]: I1202 13:44:43.909511 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:43 crc kubenswrapper[4900]: E1202 13:44:43.910256 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:44 crc kubenswrapper[4900]: I1202 13:44:44.909549 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:44 crc kubenswrapper[4900]: I1202 13:44:44.909632 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:44 crc kubenswrapper[4900]: I1202 13:44:44.909566 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:44 crc kubenswrapper[4900]: E1202 13:44:44.911420 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:44 crc kubenswrapper[4900]: E1202 13:44:44.911549 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:44 crc kubenswrapper[4900]: E1202 13:44:44.911693 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:45 crc kubenswrapper[4900]: E1202 13:44:45.040455 4900 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 13:44:45 crc kubenswrapper[4900]: I1202 13:44:45.909228 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:45 crc kubenswrapper[4900]: E1202 13:44:45.909467 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:46 crc kubenswrapper[4900]: I1202 13:44:46.910168 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:46 crc kubenswrapper[4900]: I1202 13:44:46.910264 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:46 crc kubenswrapper[4900]: I1202 13:44:46.910417 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:46 crc kubenswrapper[4900]: E1202 13:44:46.910402 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:46 crc kubenswrapper[4900]: E1202 13:44:46.910591 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:46 crc kubenswrapper[4900]: E1202 13:44:46.910853 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:47 crc kubenswrapper[4900]: I1202 13:44:47.909055 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:47 crc kubenswrapper[4900]: E1202 13:44:47.909300 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:47 crc kubenswrapper[4900]: I1202 13:44:47.910531 4900 scope.go:117] "RemoveContainer" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" Dec 02 13:44:48 crc kubenswrapper[4900]: I1202 13:44:48.773427 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/3.log" Dec 02 13:44:48 crc kubenswrapper[4900]: I1202 13:44:48.776816 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerStarted","Data":"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2"} Dec 02 13:44:48 crc kubenswrapper[4900]: I1202 13:44:48.777638 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:44:48 crc kubenswrapper[4900]: I1202 13:44:48.817101 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podStartSLOduration=112.8170706 podStartE2EDuration="1m52.8170706s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:44:48.814776326 +0000 UTC m=+134.230590177" watchObservedRunningTime="2025-12-02 13:44:48.8170706 +0000 UTC m=+134.232884491" Dec 02 13:44:48 crc kubenswrapper[4900]: I1202 13:44:48.909809 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:48 crc kubenswrapper[4900]: I1202 13:44:48.909903 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:48 crc kubenswrapper[4900]: I1202 13:44:48.909798 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:48 crc kubenswrapper[4900]: E1202 13:44:48.909963 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:48 crc kubenswrapper[4900]: E1202 13:44:48.910111 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:48 crc kubenswrapper[4900]: E1202 13:44:48.910217 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:48 crc kubenswrapper[4900]: I1202 13:44:48.997080 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kzhwn"] Dec 02 13:44:48 crc kubenswrapper[4900]: I1202 13:44:48.997312 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:48 crc kubenswrapper[4900]: E1202 13:44:48.997526 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:50 crc kubenswrapper[4900]: E1202 13:44:50.042362 4900 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 13:44:50 crc kubenswrapper[4900]: I1202 13:44:50.909591 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:50 crc kubenswrapper[4900]: I1202 13:44:50.909677 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:50 crc kubenswrapper[4900]: E1202 13:44:50.909987 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:50 crc kubenswrapper[4900]: I1202 13:44:50.909721 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:50 crc kubenswrapper[4900]: E1202 13:44:50.910163 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:50 crc kubenswrapper[4900]: I1202 13:44:50.909721 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:50 crc kubenswrapper[4900]: E1202 13:44:50.910366 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:50 crc kubenswrapper[4900]: E1202 13:44:50.911974 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:52 crc kubenswrapper[4900]: I1202 13:44:52.912023 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:52 crc kubenswrapper[4900]: I1202 13:44:52.912182 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:52 crc kubenswrapper[4900]: E1202 13:44:52.912306 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:52 crc kubenswrapper[4900]: I1202 13:44:52.912456 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:52 crc kubenswrapper[4900]: E1202 13:44:52.912700 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:52 crc kubenswrapper[4900]: E1202 13:44:52.912858 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:52 crc kubenswrapper[4900]: I1202 13:44:52.913827 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:52 crc kubenswrapper[4900]: E1202 13:44:52.914114 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:54 crc kubenswrapper[4900]: I1202 13:44:54.910913 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:54 crc kubenswrapper[4900]: I1202 13:44:54.913042 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:54 crc kubenswrapper[4900]: E1202 13:44:54.913034 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:54 crc kubenswrapper[4900]: I1202 13:44:54.913101 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:54 crc kubenswrapper[4900]: I1202 13:44:54.913157 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:54 crc kubenswrapper[4900]: E1202 13:44:54.913374 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:54 crc kubenswrapper[4900]: E1202 13:44:54.913548 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:54 crc kubenswrapper[4900]: E1202 13:44:54.913686 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kzhwn" podUID="1c63b5f6-db87-48a2-b87e-5442db707843" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.200498 4900 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.249990 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xcml8"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.251421 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.252594 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.253518 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.256936 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.257546 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.261026 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4s6cc"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.261926 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.262394 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.263087 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.264394 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.264640 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.264749 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.264704 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.264872 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.265245 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.265685 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.265757 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.265693 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.266137 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.266721 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.267048 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.267297 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.267473 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.267575 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.267976 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mq2gm"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.268803 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.271526 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.272165 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.275246 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.277715 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rl4bn"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.278397 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.280418 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.281421 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.281876 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.282624 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.282685 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.282809 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.282887 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.282890 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.282925 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.282813 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.289054 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.289833 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-87drk"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.290251 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.290499 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.293093 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.295729 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.296860 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.299345 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dbc7k"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.299992 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.305174 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5mkrz"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.308230 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.309452 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.309918 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.310438 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.310675 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.316843 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.318686 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.319756 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.319854 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.320055 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.321590 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.326656 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.326738 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.327140 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.327595 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.328354 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xffzf"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.329127 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.329165 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.329749 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.329842 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.330372 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.330533 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.330663 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.330811 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.330988 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331133 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331236 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331316 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331406 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331438 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331553 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331658 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331785 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331849 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331911 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331562 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.331608 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.332016 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.332028 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 
13:44:56.332515 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8d5sp"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.332147 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.332599 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.333200 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.329440 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.338049 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.338278 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.338382 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.339200 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.339408 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.339712 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.340893 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.340959 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.341055 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.341149 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.341264 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.341550 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.341761 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.341986 4900 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.342003 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.342129 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.342391 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.345570 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5wqjj"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.346072 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-vrdh8"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.346236 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5wqjj" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.346458 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.356215 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.356245 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.356493 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.356771 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.356978 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-etcd-serving-ca\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357009 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357045 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-trusted-ca\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357080 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5ww\" (UniqueName: 
\"kubernetes.io/projected/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-kube-api-access-8g5ww\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357104 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0d1a0ea-5032-423f-ac08-c236f60fea7f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357123 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aaa314e8-a902-4ab4-85ad-550d03c8a91d-images\") pod \"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357148 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv4p2\" (UniqueName: \"kubernetes.io/projected/a0d1a0ea-5032-423f-ac08-c236f60fea7f-kube-api-access-lv4p2\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357171 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/52279fab-53ca-41cf-8370-bbc4821be6c2-encryption-config\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357187 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-metrics-tls\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357206 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa314e8-a902-4ab4-85ad-550d03c8a91d-config\") pod \"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357224 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/aaa314e8-a902-4ab4-85ad-550d03c8a91d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357242 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-config\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357259 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d34e4093-59b0-4aba-b254-5671e760b208-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5n647\" (UID: \"d34e4093-59b0-4aba-b254-5671e760b208\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357286 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35913a77-057b-4aab-b923-97ce7871c010-config\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357308 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq6gn\" (UniqueName: \"kubernetes.io/projected/d34e4093-59b0-4aba-b254-5671e760b208-kube-api-access-jq6gn\") pod \"cluster-samples-operator-665b6dd947-5n647\" (UID: \"d34e4093-59b0-4aba-b254-5671e760b208\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357335 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3bb063e9-c943-4b94-9196-80357b0fd832-machine-approver-tls\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357351 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35913a77-057b-4aab-b923-97ce7871c010-trusted-ca\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357369 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-audit\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357385 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52279fab-53ca-41cf-8370-bbc4821be6c2-serving-cert\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357405 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72j5s\" (UniqueName: \"kubernetes.io/projected/aaa314e8-a902-4ab4-85ad-550d03c8a91d-kube-api-access-72j5s\") pod 
\"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357423 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-image-import-ca\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357439 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357458 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb063e9-c943-4b94-9196-80357b0fd832-config\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357474 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0d1a0ea-5032-423f-ac08-c236f60fea7f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357492 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0d1a0ea-5032-423f-ac08-c236f60fea7f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357512 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3bb063e9-c943-4b94-9196-80357b0fd832-auth-proxy-config\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357529 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/52279fab-53ca-41cf-8370-bbc4821be6c2-node-pullsecrets\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357546 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8z6n\" (UniqueName: \"kubernetes.io/projected/52279fab-53ca-41cf-8370-bbc4821be6c2-kube-api-access-z8z6n\") pod \"apiserver-76f77b778f-4s6cc\" (UID: 
\"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357565 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztqn8\" (UniqueName: \"kubernetes.io/projected/3bb063e9-c943-4b94-9196-80357b0fd832-kube-api-access-ztqn8\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357588 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35913a77-057b-4aab-b923-97ce7871c010-serving-cert\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357610 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52279fab-53ca-41cf-8370-bbc4821be6c2-etcd-client\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357636 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5t7\" (UniqueName: \"kubernetes.io/projected/35913a77-057b-4aab-b923-97ce7871c010-kube-api-access-zq5t7\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357674 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/52279fab-53ca-41cf-8370-bbc4821be6c2-audit-dir\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.357914 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.358725 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.359099 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.359517 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.360246 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.366229 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.366810 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.366866 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.366918 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.386290 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.387725 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.388347 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.388938 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.389173 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.393196 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-skddd"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.403253 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.414883 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.416405 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.420249 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.420630 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.420911 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.421535 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.421693 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.427482 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.429895 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.430364 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.430415 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.432294 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.447337 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.447519 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.448948 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kd2ql"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.447872 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.448028 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.449789 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.450066 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5z2g6"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.451306 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.451830 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.452306 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.452577 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.452636 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.453031 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.453261 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458083 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458505 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/52279fab-53ca-41cf-8370-bbc4821be6c2-audit-dir\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458547 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-etcd-serving-ca\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458577 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-trusted-ca\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458594 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:56 crc 
kubenswrapper[4900]: I1202 13:44:56.458657 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g5ww\" (UniqueName: \"kubernetes.io/projected/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-kube-api-access-8g5ww\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458668 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/52279fab-53ca-41cf-8370-bbc4821be6c2-audit-dir\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458682 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0d1a0ea-5032-423f-ac08-c236f60fea7f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458709 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aaa314e8-a902-4ab4-85ad-550d03c8a91d-images\") pod \"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458731 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv4p2\" (UniqueName: \"kubernetes.io/projected/a0d1a0ea-5032-423f-ac08-c236f60fea7f-kube-api-access-lv4p2\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458756 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa314e8-a902-4ab4-85ad-550d03c8a91d-config\") pod \"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458777 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/52279fab-53ca-41cf-8370-bbc4821be6c2-encryption-config\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458794 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-metrics-tls\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458819 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/35913a77-057b-4aab-b923-97ce7871c010-config\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458834 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/aaa314e8-a902-4ab4-85ad-550d03c8a91d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458852 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-config\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458874 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d34e4093-59b0-4aba-b254-5671e760b208-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5n647\" (UID: \"d34e4093-59b0-4aba-b254-5671e760b208\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458896 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq6gn\" (UniqueName: \"kubernetes.io/projected/d34e4093-59b0-4aba-b254-5671e760b208-kube-api-access-jq6gn\") pod \"cluster-samples-operator-665b6dd947-5n647\" (UID: \"d34e4093-59b0-4aba-b254-5671e760b208\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458932 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3bb063e9-c943-4b94-9196-80357b0fd832-machine-approver-tls\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458956 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35913a77-057b-4aab-b923-97ce7871c010-trusted-ca\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.458979 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72j5s\" (UniqueName: \"kubernetes.io/projected/aaa314e8-a902-4ab4-85ad-550d03c8a91d-kube-api-access-72j5s\") pod \"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459000 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-audit\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " 
pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459022 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52279fab-53ca-41cf-8370-bbc4821be6c2-serving-cert\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459042 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-image-import-ca\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459046 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459061 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459079 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb063e9-c943-4b94-9196-80357b0fd832-config\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459096 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0d1a0ea-5032-423f-ac08-c236f60fea7f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459116 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0d1a0ea-5032-423f-ac08-c236f60fea7f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459144 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3bb063e9-c943-4b94-9196-80357b0fd832-auth-proxy-config\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459167 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/52279fab-53ca-41cf-8370-bbc4821be6c2-node-pullsecrets\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc 
kubenswrapper[4900]: I1202 13:44:56.459187 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8z6n\" (UniqueName: \"kubernetes.io/projected/52279fab-53ca-41cf-8370-bbc4821be6c2-kube-api-access-z8z6n\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459207 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztqn8\" (UniqueName: \"kubernetes.io/projected/3bb063e9-c943-4b94-9196-80357b0fd832-kube-api-access-ztqn8\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459224 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35913a77-057b-4aab-b923-97ce7871c010-serving-cert\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459244 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5t7\" (UniqueName: \"kubernetes.io/projected/35913a77-057b-4aab-b923-97ce7871c010-kube-api-access-zq5t7\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459264 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52279fab-53ca-41cf-8370-bbc4821be6c2-etcd-client\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459430 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-etcd-serving-ca\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.459898 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-audit\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.460163 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dtq92"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.460296 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-trusted-ca\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.460445 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/35913a77-057b-4aab-b923-97ce7871c010-config\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.461011 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.461365 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.461575 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0d1a0ea-5032-423f-ac08-c236f60fea7f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.461583 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35913a77-057b-4aab-b923-97ce7871c010-trusted-ca\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.461702 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xcml8"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.462159 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb063e9-c943-4b94-9196-80357b0fd832-config\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.462581 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.462966 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aaa314e8-a902-4ab4-85ad-550d03c8a91d-images\") pod \"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.463013 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/52279fab-53ca-41cf-8370-bbc4821be6c2-node-pullsecrets\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.463069 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.463251 4900 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3bb063e9-c943-4b94-9196-80357b0fd832-auth-proxy-config\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.463487 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-config\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.463867 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.463878 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.467084 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.467582 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgtk7"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.467997 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.468136 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.468882 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.472813 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mrs5b"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.473823 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.474214 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.474711 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.475231 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.475378 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.478587 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0d1a0ea-5032-423f-ac08-c236f60fea7f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.478629 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.478672 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-87drk"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.478689 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa314e8-a902-4ab4-85ad-550d03c8a91d-config\") pod \"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.479066 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52279fab-53ca-41cf-8370-bbc4821be6c2-serving-cert\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.479580 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/52279fab-53ca-41cf-8370-bbc4821be6c2-image-import-ca\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.479688 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52279fab-53ca-41cf-8370-bbc4821be6c2-etcd-client\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.479843 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-metrics-tls\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.480268 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35913a77-057b-4aab-b923-97ce7871c010-serving-cert\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.480301 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 
13:44:56.480760 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.481089 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d34e4093-59b0-4aba-b254-5671e760b208-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5n647\" (UID: \"d34e4093-59b0-4aba-b254-5671e760b208\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.481482 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/52279fab-53ca-41cf-8370-bbc4821be6c2-encryption-config\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.481604 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mq2gm"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.484163 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/aaa314e8-a902-4ab4-85ad-550d03c8a91d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.484210 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.484264 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3bb063e9-c943-4b94-9196-80357b0fd832-machine-approver-tls\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.485127 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.485452 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-77dgs"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.488074 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5wqjj"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.488158 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.488527 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.490125 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.491735 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-skddd"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.492023 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.493082 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kd2ql"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.493321 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.494735 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xffzf"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.495319 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.496382 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5mkrz"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.497428 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8d5sp"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.498680 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.499549 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.500504 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rl4bn"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.501560 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4s6cc"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.502461 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-7cgf2"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.503323 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7cgf2" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.503509 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xtsj2"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.505056 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-xtsj2" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.507795 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dbc7k"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.508621 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vrdh8"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.511406 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.512266 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.516845 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mrs5b"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.519383 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.520365 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dtq92"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.521308 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.521451 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.522357 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.523410 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.524463 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7cgf2"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.525909 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.526944 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgtk7"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.528120 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xtsj2"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.529168 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.530212 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.531272 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-77dgs"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.532340 4900 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.533406 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.533401 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lpm4d"] Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.534219 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.553431 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.594270 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.613033 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.634804 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.654141 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661392 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-service-ca-bundle\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661439 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8v22\" (UniqueName: \"kubernetes.io/projected/65ea3056-b990-4a94-a5aa-56a2a0f24b92-kube-api-access-j8v22\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661468 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-client-ca\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661496 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m46fb\" (UniqueName: \"kubernetes.io/projected/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-kube-api-access-m46fb\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661541 4900 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ea3056-b990-4a94-a5aa-56a2a0f24b92-serving-cert\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661602 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-serving-cert\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661653 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661674 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-encryption-config\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661696 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ff406d69-c78d-478d-947c-c1b9ae6ae503-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661714 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661732 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ff406d69-c78d-478d-947c-c1b9ae6ae503-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661752 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-policies\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661772 4900 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5276c97-9e84-4632-98dc-43d3d4c1fefd-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6ch9f\" (UID: \"e5276c97-9e84-4632-98dc-43d3d4c1fefd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661845 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-config\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.661898 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a72dd04d-bb06-4b3a-9f08-d68072239bd8-serving-cert\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662081 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662261 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662431 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mxsd\" (UniqueName: \"kubernetes.io/projected/a72dd04d-bb06-4b3a-9f08-d68072239bd8-kube-api-access-5mxsd\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662492 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-etcd-client\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662560 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5276c97-9e84-4632-98dc-43d3d4c1fefd-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6ch9f\" (UID: \"e5276c97-9e84-4632-98dc-43d3d4c1fefd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662613 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662687 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662762 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662816 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-config\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662857 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662952 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.662989 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqnsp\" (UniqueName: \"kubernetes.io/projected/eb44d009-5920-4606-aba3-aaf7104b1a22-kube-api-access-bqnsp\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663138 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtlwm\" (UniqueName: \"kubernetes.io/projected/c782003f-e8d3-4aa5-aba6-0db2706d4e43-kube-api-access-xtlwm\") pod \"openshift-config-operator-7777fb866f-87drk\" (UID: \"c782003f-e8d3-4aa5-aba6-0db2706d4e43\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" Dec 02 13:44:56 crc 
kubenswrapper[4900]: I1202 13:44:56.663206 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c782003f-e8d3-4aa5-aba6-0db2706d4e43-serving-cert\") pod \"openshift-config-operator-7777fb866f-87drk\" (UID: \"c782003f-e8d3-4aa5-aba6-0db2706d4e43\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663246 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c782003f-e8d3-4aa5-aba6-0db2706d4e43-available-featuregates\") pod \"openshift-config-operator-7777fb866f-87drk\" (UID: \"c782003f-e8d3-4aa5-aba6-0db2706d4e43\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663280 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-audit-dir\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663324 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlpc9\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-kube-api-access-tlpc9\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663365 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663408 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663451 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-tls\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663482 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 
crc kubenswrapper[4900]: I1202 13:44:56.663527 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663571 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-trusted-ca\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663607 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663677 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-certificates\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663718 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-config\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663750 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-dir\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663809 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdsk5\" (UniqueName: \"kubernetes.io/projected/e5276c97-9e84-4632-98dc-43d3d4c1fefd-kube-api-access-mdsk5\") pod \"openshift-apiserver-operator-796bbdcf4f-6ch9f\" (UID: \"e5276c97-9e84-4632-98dc-43d3d4c1fefd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663849 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663897 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-bound-sa-token\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663943 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.663991 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-serving-cert\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.664049 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcc7c\" (UniqueName: \"kubernetes.io/projected/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-kube-api-access-vcc7c\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: E1202 13:44:56.664375 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.164047539 +0000 UTC m=+142.579861430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.664451 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-client-ca\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.664561 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-audit-policies\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.674339 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.694466 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.715083 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.734564 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.754887 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.765306 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.765552 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqnsp\" (UniqueName: \"kubernetes.io/projected/eb44d009-5920-4606-aba3-aaf7104b1a22-kube-api-access-bqnsp\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: E1202 13:44:56.765610 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.265566024 +0000 UTC m=+142.681379915 (durationBeforeRetry 500ms). 
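[Editorial aside] Both E1202 records above fail for the same reason: the image-registry PVC is backed by the kubevirt.io.hostpath-provisioner CSI driver, but at this point in node startup that driver has not yet registered with the kubelet's plugin watcher (the csi-hostpathplugin-77dgs pod is itself only being scheduled a few records earlier, still with "No sandbox for pod can be found"). Kubelet backs off and retries (durationBeforeRetry 500ms), so the mount and unmount succeed on their own once the driver pod is running. A minimal Go sketch for checking driver registration from the node follows; it is not part of the log, and the registration directory /var/lib/kubelet/plugins_registry plus the <driver-name>-reg.sock socket naming used by node-driver-registrar are assumptions based on the default kubelet root, not something this log states.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Lists the CSI plugin registration sockets visible to kubelet's plugin
// watcher and reports whether the hostpath provisioner has registered yet.
func main() {
	// Assumed default plugin-watcher directory under the kubelet root;
	// adjust for a non-default --root-dir.
	const regDir = "/var/lib/kubelet/plugins_registry"
	const driver = "kubevirt.io.hostpath-provisioner"

	entries, err := os.ReadDir(regDir)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", regDir, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		fmt.Println(filepath.Join(regDir, e.Name()))
		// node-driver-registrar conventionally creates <driver-name>-reg.sock
		// here (assumption about the deployment's registrar sidecar).
		if strings.HasPrefix(e.Name(), driver) {
			found = true
		}
	}
	if found {
		fmt.Printf("%s appears registered; retried mounts should succeed\n", driver)
	} else {
		fmt.Printf("%s not registered yet; MountDevice/TearDown will keep failing\n", driver)
	}
}

If the socket never appears, the place to look is the csi-hostpathplugin pod rather than kubelet; kubelet's retry loop needs no intervention.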
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.765717 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b29ce520-0853-4925-9974-165b2a41bcfa-tmpfs\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.765797 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c782003f-e8d3-4aa5-aba6-0db2706d4e43-available-featuregates\") pod \"openshift-config-operator-7777fb866f-87drk\" (UID: \"c782003f-e8d3-4aa5-aba6-0db2706d4e43\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.765845 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789g6\" (UniqueName: \"kubernetes.io/projected/3040cbe6-2783-43e2-9786-89fb91444b8e-kube-api-access-789g6\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.765885 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766106 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766140 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpc7q\" (UniqueName: \"kubernetes.io/projected/c6ae55d7-1773-484c-9657-a6438f072dee-kube-api-access-cpc7q\") pod \"downloads-7954f5f757-5wqjj\" (UID: \"c6ae55d7-1773-484c-9657-a6438f072dee\") " pod="openshift-console/downloads-7954f5f757-5wqjj"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766176 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-tls\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766210 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e8357b0-15fc-4b14-87b5-fdd058c316f3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766255 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t9nf\" (UniqueName: \"kubernetes.io/projected/0e8357b0-15fc-4b14-87b5-fdd058c316f3-kube-api-access-5t9nf\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766289 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/725a6e19-5648-4f21-8405-1b6f29d6e9be-metrics-tls\") pod \"dns-default-xtsj2\" (UID: \"725a6e19-5648-4f21-8405-1b6f29d6e9be\") " pod="openshift-dns/dns-default-xtsj2"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766392 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766428 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b725394f-0913-4f97-b61e-6906b21741be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zbsck\" (UID: \"b725394f-0913-4f97-b61e-6906b21741be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766448 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c782003f-e8d3-4aa5-aba6-0db2706d4e43-available-featuregates\") pod \"openshift-config-operator-7777fb866f-87drk\" (UID: \"c782003f-e8d3-4aa5-aba6-0db2706d4e43\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766464 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-srz7l\" (UID: \"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766499 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-cabundle\") pod \"service-ca-9c57cc56f-mrs5b\" (UID: \"4e4344cf-f5b8-49d4-91e1-9726ea4e6197\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766532 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tgtk7\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766592 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-dir\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766628 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/535396d3-f3c8-4175-a498-526e02960674-secret-volume\") pod \"collect-profiles-29411370-bpgg9\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766696 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js2p5\" (UniqueName: \"kubernetes.io/projected/725a6e19-5648-4f21-8405-1b6f29d6e9be-kube-api-access-js2p5\") pod \"dns-default-xtsj2\" (UID: \"725a6e19-5648-4f21-8405-1b6f29d6e9be\") " pod="openshift-dns/dns-default-xtsj2"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766735 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-serving-cert\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766757 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-dir\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766776 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.766981 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98bca191-5b30-464c-89ad-01df623a1728-srv-cert\") pod \"catalog-operator-68c6474976-7tvc9\" (UID: \"98bca191-5b30-464c-89ad-01df623a1728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767087 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc8qv\" (UniqueName: \"kubernetes.io/projected/95616fe1-4979-433d-afce-3235d5dab8a5-kube-api-access-vc8qv\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767119 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn7gg\" (UniqueName: \"kubernetes.io/projected/de46dffd-919a-4df1-9d52-cbf1d14b8205-kube-api-access-jn7gg\") pod \"marketplace-operator-79b997595-tgtk7\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767166 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526979fe-578d-44b2-b8af-b02c7d712f7a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8h78\" (UID: \"526979fe-578d-44b2-b8af-b02c7d712f7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767196 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-bound-sa-token\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767221 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-serving-cert\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767277 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-client-ca\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767304 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-audit-policies\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767306 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767353 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7k7\" (UniqueName: \"kubernetes.io/projected/535396d3-f3c8-4175-a498-526e02960674-kube-api-access-gc7k7\") pod \"collect-profiles-29411370-bpgg9\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767371 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767386 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-registration-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767434 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/67af6ca9-8ecc-4615-8d7a-670914a7d5f5-srv-cert\") pod \"olm-operator-6b444d44fb-sqk29\" (UID: \"67af6ca9-8ecc-4615-8d7a-670914a7d5f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767461 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbx2g\" (UniqueName: \"kubernetes.io/projected/55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d-kube-api-access-lbx2g\") pod \"machine-config-controller-84d6567774-srz7l\" (UID: \"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767491 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-client-ca\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767515 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3df749bb-0f54-4f5b-b9b9-cf46babaf698-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bkwqf\" (UID: \"3df749bb-0f54-4f5b-b9b9-cf46babaf698\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767556 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca3a2685-a14d-4ffe-8f76-55de65b5841b-config\") pod \"kube-controller-manager-operator-78b949d7b-qsd4r\" (UID: \"ca3a2685-a14d-4ffe-8f76-55de65b5841b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767610 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c62ac3-afa3-4940-83b5-7f071a231367-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ldz9n\" (UID: \"d8c62ac3-afa3-4940-83b5-7f071a231367\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767717 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mww9c\" (UniqueName: \"kubernetes.io/projected/67af6ca9-8ecc-4615-8d7a-670914a7d5f5-kube-api-access-mww9c\") pod \"olm-operator-6b444d44fb-sqk29\" (UID: \"67af6ca9-8ecc-4615-8d7a-670914a7d5f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767741 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tgtk7\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767793 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95wxv\" (UniqueName: \"kubernetes.io/projected/f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f-kube-api-access-95wxv\") pod \"multus-admission-controller-857f4d67dd-dtq92\" (UID: \"f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.767815 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/629226c0-c1d6-4d74-a041-7eb24832256f-default-certificate\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.768449 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/725a6e19-5648-4f21-8405-1b6f29d6e9be-config-volume\") pod \"dns-default-xtsj2\" (UID: \"725a6e19-5648-4f21-8405-1b6f29d6e9be\") " pod="openshift-dns/dns-default-xtsj2"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.768487 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc2zz\" (UniqueName: \"kubernetes.io/projected/d8c62ac3-afa3-4940-83b5-7f071a231367-kube-api-access-zc2zz\") pod \"openshift-controller-manager-operator-756b6f6bc6-ldz9n\" (UID: \"d8c62ac3-afa3-4940-83b5-7f071a231367\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.768554 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3040cbe6-2783-43e2-9786-89fb91444b8e-etcd-service-ca\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.768628 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-plugins-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.768763 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-policies\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769024 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-config\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.768733 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-audit-policies\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769101 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a72dd04d-bb06-4b3a-9f08-d68072239bd8-serving-cert\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769191 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769268 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-node-bootstrap-token\") pod \"machine-config-server-lpm4d\" (UID: \"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf\") " pod="openshift-machine-config-operator/machine-config-server-lpm4d"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769329 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3040cbe6-2783-43e2-9786-89fb91444b8e-config\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql"
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769361 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3040cbe6-2783-43e2-9786-89fb91444b8e-etcd-client\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql"
Dec 02 13:44:56 crc
kubenswrapper[4900]: I1202 13:44:56.769440 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e8357b0-15fc-4b14-87b5-fdd058c316f3-proxy-tls\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769529 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d-proxy-tls\") pod \"machine-config-controller-84d6567774-srz7l\" (UID: \"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769597 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9bd8a2c-57b4-40b4-b931-16496b5236a0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n7qvb\" (UID: \"c9bd8a2c-57b4-40b4-b931-16496b5236a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769634 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/67af6ca9-8ecc-4615-8d7a-670914a7d5f5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sqk29\" (UID: \"67af6ca9-8ecc-4615-8d7a-670914a7d5f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769707 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b29ce520-0853-4925-9974-165b2a41bcfa-webhook-cert\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769777 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-key\") pod \"service-ca-9c57cc56f-mrs5b\" (UID: \"4e4344cf-f5b8-49d4-91e1-9726ea4e6197\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769790 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-client-ca\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769847 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab3adbee-6d24-4396-a6a8-dfd4e5255627-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q5tmt\" (UID: \"ab3adbee-6d24-4396-a6a8-dfd4e5255627\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769889 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df749bb-0f54-4f5b-b9b9-cf46babaf698-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bkwqf\" (UID: \"3df749bb-0f54-4f5b-b9b9-cf46babaf698\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.769965 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2khs\" (UniqueName: \"kubernetes.io/projected/64482a60-f5ce-47c1-9389-3945ebe3087d-kube-api-access-v2khs\") pod \"service-ca-operator-777779d784-wsrhj\" (UID: \"64482a60-f5ce-47c1-9389-3945ebe3087d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.770036 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dtq92\" (UID: \"f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.770070 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/629226c0-c1d6-4d74-a041-7eb24832256f-stats-auth\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.770138 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svzvc\" (UniqueName: \"kubernetes.io/projected/629226c0-c1d6-4d74-a041-7eb24832256f-kube-api-access-svzvc\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.770215 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-policies\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.770815 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.770979 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-config\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 
13:44:56.770340 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771102 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-config\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771148 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/535396d3-f3c8-4175-a498-526e02960674-config-volume\") pod \"collect-profiles-29411370-bpgg9\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771199 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-socket-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771216 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771244 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771311 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtlwm\" (UniqueName: \"kubernetes.io/projected/c782003f-e8d3-4aa5-aba6-0db2706d4e43-kube-api-access-xtlwm\") pod \"openshift-config-operator-7777fb866f-87drk\" (UID: \"c782003f-e8d3-4aa5-aba6-0db2706d4e43\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771336 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-audit-dir\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771360 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98bca191-5b30-464c-89ad-01df623a1728-profile-collector-cert\") pod \"catalog-operator-68c6474976-7tvc9\" (UID: \"98bca191-5b30-464c-89ad-01df623a1728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771382 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-certs\") pod \"machine-config-server-lpm4d\" (UID: \"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf\") " pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771426 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df749bb-0f54-4f5b-b9b9-cf46babaf698-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bkwqf\" (UID: \"3df749bb-0f54-4f5b-b9b9-cf46babaf698\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771471 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c782003f-e8d3-4aa5-aba6-0db2706d4e43-serving-cert\") pod \"openshift-config-operator-7777fb866f-87drk\" (UID: \"c782003f-e8d3-4aa5-aba6-0db2706d4e43\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771496 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526979fe-578d-44b2-b8af-b02c7d712f7a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8h78\" (UID: \"526979fe-578d-44b2-b8af-b02c7d712f7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771501 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-audit-dir\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771533 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlpc9\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-kube-api-access-tlpc9\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771569 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771608 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771670 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-trusted-ca\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771710 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-certificates\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771735 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-config\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771808 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdzpp\" (UniqueName: \"kubernetes.io/projected/87d7155d-17a2-4191-8a85-9b6277641b28-kube-api-access-qdzpp\") pod \"ingress-canary-7cgf2\" (UID: \"87d7155d-17a2-4191-8a85-9b6277641b28\") " pod="openshift-ingress-canary/ingress-canary-7cgf2" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771838 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7k6w\" (UniqueName: \"kubernetes.io/projected/526979fe-578d-44b2-b8af-b02c7d712f7a-kube-api-access-c7k6w\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8h78\" (UID: \"526979fe-578d-44b2-b8af-b02c7d712f7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771877 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64482a60-f5ce-47c1-9389-3945ebe3087d-serving-cert\") pod \"service-ca-operator-777779d784-wsrhj\" (UID: \"64482a60-f5ce-47c1-9389-3945ebe3087d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771913 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jpj9\" (UniqueName: \"kubernetes.io/projected/98bca191-5b30-464c-89ad-01df623a1728-kube-api-access-8jpj9\") pod \"catalog-operator-68c6474976-7tvc9\" (UID: \"98bca191-5b30-464c-89ad-01df623a1728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.771996 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdsk5\" (UniqueName: 
\"kubernetes.io/projected/e5276c97-9e84-4632-98dc-43d3d4c1fefd-kube-api-access-mdsk5\") pod \"openshift-apiserver-operator-796bbdcf4f-6ch9f\" (UID: \"e5276c97-9e84-4632-98dc-43d3d4c1fefd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.772688 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-serving-cert\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: E1202 13:44:56.773417 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.273394142 +0000 UTC m=+142.689208233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.773631 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.773738 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rwrg\" (UniqueName: \"kubernetes.io/projected/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-kube-api-access-5rwrg\") pod \"service-ca-9c57cc56f-mrs5b\" (UID: \"4e4344cf-f5b8-49d4-91e1-9726ea4e6197\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.773845 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdzrw\" (UniqueName: \"kubernetes.io/projected/312ac034-6fc6-4ceb-bb05-56d80e07a205-kube-api-access-qdzrw\") pod \"dns-operator-744455d44c-8d5sp\" (UID: \"312ac034-6fc6-4ceb-bb05-56d80e07a205\") " pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.773949 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcc7c\" (UniqueName: \"kubernetes.io/projected/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-kube-api-access-vcc7c\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.774480 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.774524 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b725394f-0913-4f97-b61e-6906b21741be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zbsck\" (UID: \"b725394f-0913-4f97-b61e-6906b21741be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.774585 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sq8l\" (UniqueName: \"kubernetes.io/projected/4dda8c36-2a02-4199-a2e3-33ae4a218883-kube-api-access-6sq8l\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.775076 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8v22\" (UniqueName: \"kubernetes.io/projected/65ea3056-b990-4a94-a5aa-56a2a0f24b92-kube-api-access-j8v22\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.775186 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m46fb\" (UniqueName: \"kubernetes.io/projected/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-kube-api-access-m46fb\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.775303 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c62ac3-afa3-4940-83b5-7f071a231367-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ldz9n\" (UID: \"d8c62ac3-afa3-4940-83b5-7f071a231367\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.775441 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ea3056-b990-4a94-a5aa-56a2a0f24b92-serving-cert\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.776928 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.777159 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-client-ca\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: 
\"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.777316 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-trusted-ca\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.777320 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-config\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.777754 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-certificates\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.778122 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a72dd04d-bb06-4b3a-9f08-d68072239bd8-serving-cert\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.778194 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-config\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.778234 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.778409 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-service-ca-bundle\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.780103 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-oauth-serving-cert\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.781524 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-tls\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.781535 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-serving-cert\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.780919 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.782075 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-oauth-config\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.782309 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.782384 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-encryption-config\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.782557 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rlmv\" (UniqueName: \"kubernetes.io/projected/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-kube-api-access-9rlmv\") pod \"machine-config-server-lpm4d\" (UID: \"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf\") " pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.782622 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3a2685-a14d-4ffe-8f76-55de65b5841b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qsd4r\" (UID: \"ca3a2685-a14d-4ffe-8f76-55de65b5841b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.782699 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c782003f-e8d3-4aa5-aba6-0db2706d4e43-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-87drk\" (UID: \"c782003f-e8d3-4aa5-aba6-0db2706d4e43\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.782868 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ff406d69-c78d-478d-947c-c1b9ae6ae503-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.782954 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b29ce520-0853-4925-9974-165b2a41bcfa-apiservice-cert\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.783020 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/629226c0-c1d6-4d74-a041-7eb24832256f-metrics-certs\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.783076 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.783150 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rghqv\" (UniqueName: \"kubernetes.io/projected/b29ce520-0853-4925-9974-165b2a41bcfa-kube-api-access-rghqv\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.783224 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64482a60-f5ce-47c1-9389-3945ebe3087d-config\") pod \"service-ca-operator-777779d784-wsrhj\" (UID: \"64482a60-f5ce-47c1-9389-3945ebe3087d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.784301 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3040cbe6-2783-43e2-9786-89fb91444b8e-serving-cert\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.784349 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-csi-data-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 
02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.784396 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ff406d69-c78d-478d-947c-c1b9ae6ae503-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.784444 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqrft\" (UniqueName: \"kubernetes.io/projected/c9bd8a2c-57b4-40b4-b931-16496b5236a0-kube-api-access-fqrft\") pod \"control-plane-machine-set-operator-78cbb6b69f-n7qvb\" (UID: \"c9bd8a2c-57b4-40b4-b931-16496b5236a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.784911 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-console-config\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.784960 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5276c97-9e84-4632-98dc-43d3d4c1fefd-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6ch9f\" (UID: \"e5276c97-9e84-4632-98dc-43d3d4c1fefd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.785004 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca3a2685-a14d-4ffe-8f76-55de65b5841b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qsd4r\" (UID: \"ca3a2685-a14d-4ffe-8f76-55de65b5841b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.785144 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.785238 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mxsd\" (UniqueName: \"kubernetes.io/projected/a72dd04d-bb06-4b3a-9f08-d68072239bd8-kube-api-access-5mxsd\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.785277 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkhkh\" (UniqueName: \"kubernetes.io/projected/ab3adbee-6d24-4396-a6a8-dfd4e5255627-kube-api-access-dkhkh\") pod \"package-server-manager-789f6589d5-q5tmt\" (UID: \"ab3adbee-6d24-4396-a6a8-dfd4e5255627\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.785323 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-etcd-client\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.785362 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-trusted-ca-bundle\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.785836 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.788817 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ff406d69-c78d-478d-947c-c1b9ae6ae503-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.789554 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5276c97-9e84-4632-98dc-43d3d4c1fefd-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6ch9f\" (UID: \"e5276c97-9e84-4632-98dc-43d3d4c1fefd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.789634 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/312ac034-6fc6-4ceb-bb05-56d80e07a205-metrics-tls\") pod \"dns-operator-744455d44c-8d5sp\" (UID: \"312ac034-6fc6-4ceb-bb05-56d80e07a205\") " pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.789790 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d7155d-17a2-4191-8a85-9b6277641b28-cert\") pod \"ingress-canary-7cgf2\" (UID: \"87d7155d-17a2-4191-8a85-9b6277641b28\") " pod="openshift-ingress-canary/ingress-canary-7cgf2" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.789989 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.790043 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.790087 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84m4c\" (UniqueName: \"kubernetes.io/projected/b3edd4c5-0d86-4f99-bb00-e9b134cda502-kube-api-access-84m4c\") pod \"migrator-59844c95c7-6pj4t\" (UID: \"b3edd4c5-0d86-4f99-bb00-e9b134cda502\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.790581 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.790750 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e8357b0-15fc-4b14-87b5-fdd058c316f3-images\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.790811 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3040cbe6-2783-43e2-9786-89fb91444b8e-etcd-ca\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.791065 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/629226c0-c1d6-4d74-a041-7eb24832256f-service-ca-bundle\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.791140 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.791235 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-service-ca\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.791300 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b725394f-0913-4f97-b61e-6906b21741be-config\") pod 
\"kube-apiserver-operator-766d6c64bb-zbsck\" (UID: \"b725394f-0913-4f97-b61e-6906b21741be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.791342 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-mountpoint-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.791358 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5276c97-9e84-4632-98dc-43d3d4c1fefd-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6ch9f\" (UID: \"e5276c97-9e84-4632-98dc-43d3d4c1fefd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.793104 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-serving-cert\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.794038 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.794992 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.794999 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.795056 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5276c97-9e84-4632-98dc-43d3d4c1fefd-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6ch9f\" (UID: \"e5276c97-9e84-4632-98dc-43d3d4c1fefd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.795952 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.796129 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.796848 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.799990 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-etcd-client\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.800139 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ff406d69-c78d-478d-947c-c1b9ae6ae503-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.802541 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.803977 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ea3056-b990-4a94-a5aa-56a2a0f24b92-serving-cert\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.804830 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-encryption-config\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.817301 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.834582 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.854606 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.875385 4900 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.893184 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:56 crc kubenswrapper[4900]: E1202 13:44:56.893707 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.393569609 +0000 UTC m=+142.809383500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.893864 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9bd8a2c-57b4-40b4-b931-16496b5236a0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n7qvb\" (UID: \"c9bd8a2c-57b4-40b4-b931-16496b5236a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.893984 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/67af6ca9-8ecc-4615-8d7a-670914a7d5f5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sqk29\" (UID: \"67af6ca9-8ecc-4615-8d7a-670914a7d5f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.894071 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-key\") pod \"service-ca-9c57cc56f-mrs5b\" (UID: \"4e4344cf-f5b8-49d4-91e1-9726ea4e6197\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.894145 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab3adbee-6d24-4396-a6a8-dfd4e5255627-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q5tmt\" (UID: \"ab3adbee-6d24-4396-a6a8-dfd4e5255627\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.894214 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b29ce520-0853-4925-9974-165b2a41bcfa-webhook-cert\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" 
Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.894255 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2khs\" (UniqueName: \"kubernetes.io/projected/64482a60-f5ce-47c1-9389-3945ebe3087d-kube-api-access-v2khs\") pod \"service-ca-operator-777779d784-wsrhj\" (UID: \"64482a60-f5ce-47c1-9389-3945ebe3087d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.894569 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dtq92\" (UID: \"f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.894832 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/629226c0-c1d6-4d74-a041-7eb24832256f-stats-auth\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.894885 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.894955 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svzvc\" (UniqueName: \"kubernetes.io/projected/629226c0-c1d6-4d74-a041-7eb24832256f-kube-api-access-svzvc\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895034 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df749bb-0f54-4f5b-b9b9-cf46babaf698-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bkwqf\" (UID: \"3df749bb-0f54-4f5b-b9b9-cf46babaf698\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895106 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/535396d3-f3c8-4175-a498-526e02960674-config-volume\") pod \"collect-profiles-29411370-bpgg9\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895142 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-socket-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895286 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df749bb-0f54-4f5b-b9b9-cf46babaf698-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bkwqf\" (UID: \"3df749bb-0f54-4f5b-b9b9-cf46babaf698\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895340 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98bca191-5b30-464c-89ad-01df623a1728-profile-collector-cert\") pod \"catalog-operator-68c6474976-7tvc9\" (UID: \"98bca191-5b30-464c-89ad-01df623a1728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895413 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-certs\") pod \"machine-config-server-lpm4d\" (UID: \"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf\") " pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895499 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526979fe-578d-44b2-b8af-b02c7d712f7a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8h78\" (UID: \"526979fe-578d-44b2-b8af-b02c7d712f7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895515 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-socket-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895623 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895716 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdzpp\" (UniqueName: \"kubernetes.io/projected/87d7155d-17a2-4191-8a85-9b6277641b28-kube-api-access-qdzpp\") pod \"ingress-canary-7cgf2\" (UID: \"87d7155d-17a2-4191-8a85-9b6277641b28\") " pod="openshift-ingress-canary/ingress-canary-7cgf2" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895789 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7k6w\" (UniqueName: \"kubernetes.io/projected/526979fe-578d-44b2-b8af-b02c7d712f7a-kube-api-access-c7k6w\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8h78\" (UID: \"526979fe-578d-44b2-b8af-b02c7d712f7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.895891 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64482a60-f5ce-47c1-9389-3945ebe3087d-serving-cert\") pod \"service-ca-operator-777779d784-wsrhj\" (UID: \"64482a60-f5ce-47c1-9389-3945ebe3087d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:56 crc 
kubenswrapper[4900]: E1202 13:44:56.895992 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.395979916 +0000 UTC m=+142.811793777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896017 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jpj9\" (UniqueName: \"kubernetes.io/projected/98bca191-5b30-464c-89ad-01df623a1728-kube-api-access-8jpj9\") pod \"catalog-operator-68c6474976-7tvc9\" (UID: \"98bca191-5b30-464c-89ad-01df623a1728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896047 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rwrg\" (UniqueName: \"kubernetes.io/projected/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-kube-api-access-5rwrg\") pod \"service-ca-9c57cc56f-mrs5b\" (UID: \"4e4344cf-f5b8-49d4-91e1-9726ea4e6197\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896070 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdzrw\" (UniqueName: \"kubernetes.io/projected/312ac034-6fc6-4ceb-bb05-56d80e07a205-kube-api-access-qdzrw\") pod \"dns-operator-744455d44c-8d5sp\" (UID: \"312ac034-6fc6-4ceb-bb05-56d80e07a205\") " pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896114 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b725394f-0913-4f97-b61e-6906b21741be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zbsck\" (UID: \"b725394f-0913-4f97-b61e-6906b21741be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896154 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sq8l\" (UniqueName: \"kubernetes.io/projected/4dda8c36-2a02-4199-a2e3-33ae4a218883-kube-api-access-6sq8l\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896181 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c62ac3-afa3-4940-83b5-7f071a231367-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ldz9n\" (UID: \"d8c62ac3-afa3-4940-83b5-7f071a231367\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896210 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-oauth-serving-cert\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896234 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rlmv\" (UniqueName: \"kubernetes.io/projected/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-kube-api-access-9rlmv\") pod \"machine-config-server-lpm4d\" (UID: \"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf\") " pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896254 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-oauth-config\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896242 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df749bb-0f54-4f5b-b9b9-cf46babaf698-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bkwqf\" (UID: \"3df749bb-0f54-4f5b-b9b9-cf46babaf698\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896280 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b29ce520-0853-4925-9974-165b2a41bcfa-apiservice-cert\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896303 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3a2685-a14d-4ffe-8f76-55de65b5841b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qsd4r\" (UID: \"ca3a2685-a14d-4ffe-8f76-55de65b5841b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.896620 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rghqv\" (UniqueName: \"kubernetes.io/projected/b29ce520-0853-4925-9974-165b2a41bcfa-kube-api-access-rghqv\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.897927 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-oauth-serving-cert\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.898047 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64482a60-f5ce-47c1-9389-3945ebe3087d-config\") pod \"service-ca-operator-777779d784-wsrhj\" (UID: \"64482a60-f5ce-47c1-9389-3945ebe3087d\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.898384 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3040cbe6-2783-43e2-9786-89fb91444b8e-serving-cert\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.898458 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/629226c0-c1d6-4d74-a041-7eb24832256f-metrics-certs\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899015 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqrft\" (UniqueName: \"kubernetes.io/projected/c9bd8a2c-57b4-40b4-b931-16496b5236a0-kube-api-access-fqrft\") pod \"control-plane-machine-set-operator-78cbb6b69f-n7qvb\" (UID: \"c9bd8a2c-57b4-40b4-b931-16496b5236a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899100 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-console-config\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899138 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-csi-data-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899225 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca3a2685-a14d-4ffe-8f76-55de65b5841b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qsd4r\" (UID: \"ca3a2685-a14d-4ffe-8f76-55de65b5841b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899319 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkhkh\" (UniqueName: \"kubernetes.io/projected/ab3adbee-6d24-4396-a6a8-dfd4e5255627-kube-api-access-dkhkh\") pod \"package-server-manager-789f6589d5-q5tmt\" (UID: \"ab3adbee-6d24-4396-a6a8-dfd4e5255627\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899384 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-trusted-ca-bundle\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899422 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d7155d-17a2-4191-8a85-9b6277641b28-cert\") pod \"ingress-canary-7cgf2\" (UID: \"87d7155d-17a2-4191-8a85-9b6277641b28\") " pod="openshift-ingress-canary/ingress-canary-7cgf2" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899455 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/312ac034-6fc6-4ceb-bb05-56d80e07a205-metrics-tls\") pod \"dns-operator-744455d44c-8d5sp\" (UID: \"312ac034-6fc6-4ceb-bb05-56d80e07a205\") " pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899496 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84m4c\" (UniqueName: \"kubernetes.io/projected/b3edd4c5-0d86-4f99-bb00-e9b134cda502-kube-api-access-84m4c\") pod \"migrator-59844c95c7-6pj4t\" (UID: \"b3edd4c5-0d86-4f99-bb00-e9b134cda502\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899482 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-csi-data-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899564 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e8357b0-15fc-4b14-87b5-fdd058c316f3-images\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899596 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3040cbe6-2783-43e2-9786-89fb91444b8e-etcd-ca\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899631 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/629226c0-c1d6-4d74-a041-7eb24832256f-service-ca-bundle\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899697 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-service-ca\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899761 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b725394f-0913-4f97-b61e-6906b21741be-config\") pod \"kube-apiserver-operator-766d6c64bb-zbsck\" (UID: \"b725394f-0913-4f97-b61e-6906b21741be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899793 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-mountpoint-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899841 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b29ce520-0853-4925-9974-165b2a41bcfa-tmpfs\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899879 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-789g6\" (UniqueName: \"kubernetes.io/projected/3040cbe6-2783-43e2-9786-89fb91444b8e-kube-api-access-789g6\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899916 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpc7q\" (UniqueName: \"kubernetes.io/projected/c6ae55d7-1773-484c-9657-a6438f072dee-kube-api-access-cpc7q\") pod \"downloads-7954f5f757-5wqjj\" (UID: \"c6ae55d7-1773-484c-9657-a6438f072dee\") " pod="openshift-console/downloads-7954f5f757-5wqjj" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899951 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e8357b0-15fc-4b14-87b5-fdd058c316f3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.899991 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t9nf\" (UniqueName: \"kubernetes.io/projected/0e8357b0-15fc-4b14-87b5-fdd058c316f3-kube-api-access-5t9nf\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900026 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b725394f-0913-4f97-b61e-6906b21741be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zbsck\" (UID: \"b725394f-0913-4f97-b61e-6906b21741be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900062 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-srz7l\" (UID: \"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900094 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/725a6e19-5648-4f21-8405-1b6f29d6e9be-metrics-tls\") pod 
\"dns-default-xtsj2\" (UID: \"725a6e19-5648-4f21-8405-1b6f29d6e9be\") " pod="openshift-dns/dns-default-xtsj2" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900131 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-cabundle\") pod \"service-ca-9c57cc56f-mrs5b\" (UID: \"4e4344cf-f5b8-49d4-91e1-9726ea4e6197\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900166 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tgtk7\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900201 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js2p5\" (UniqueName: \"kubernetes.io/projected/725a6e19-5648-4f21-8405-1b6f29d6e9be-kube-api-access-js2p5\") pod \"dns-default-xtsj2\" (UID: \"725a6e19-5648-4f21-8405-1b6f29d6e9be\") " pod="openshift-dns/dns-default-xtsj2" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900235 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-serving-cert\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900267 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/535396d3-f3c8-4175-a498-526e02960674-secret-volume\") pod \"collect-profiles-29411370-bpgg9\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900304 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn7gg\" (UniqueName: \"kubernetes.io/projected/de46dffd-919a-4df1-9d52-cbf1d14b8205-kube-api-access-jn7gg\") pod \"marketplace-operator-79b997595-tgtk7\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900340 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98bca191-5b30-464c-89ad-01df623a1728-srv-cert\") pod \"catalog-operator-68c6474976-7tvc9\" (UID: \"98bca191-5b30-464c-89ad-01df623a1728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900374 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc8qv\" (UniqueName: \"kubernetes.io/projected/95616fe1-4979-433d-afce-3235d5dab8a5-kube-api-access-vc8qv\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900409 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/526979fe-578d-44b2-b8af-b02c7d712f7a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8h78\" (UID: \"526979fe-578d-44b2-b8af-b02c7d712f7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900463 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc7k7\" (UniqueName: \"kubernetes.io/projected/535396d3-f3c8-4175-a498-526e02960674-kube-api-access-gc7k7\") pod \"collect-profiles-29411370-bpgg9\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900496 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-registration-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900530 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/67af6ca9-8ecc-4615-8d7a-670914a7d5f5-srv-cert\") pod \"olm-operator-6b444d44fb-sqk29\" (UID: \"67af6ca9-8ecc-4615-8d7a-670914a7d5f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900552 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9bd8a2c-57b4-40b4-b931-16496b5236a0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n7qvb\" (UID: \"c9bd8a2c-57b4-40b4-b931-16496b5236a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900567 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbx2g\" (UniqueName: \"kubernetes.io/projected/55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d-kube-api-access-lbx2g\") pod \"machine-config-controller-84d6567774-srz7l\" (UID: \"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900866 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3df749bb-0f54-4f5b-b9b9-cf46babaf698-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bkwqf\" (UID: \"3df749bb-0f54-4f5b-b9b9-cf46babaf698\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900928 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca3a2685-a14d-4ffe-8f76-55de65b5841b-config\") pod \"kube-controller-manager-operator-78b949d7b-qsd4r\" (UID: \"ca3a2685-a14d-4ffe-8f76-55de65b5841b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.900984 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d8c62ac3-afa3-4940-83b5-7f071a231367-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ldz9n\" (UID: \"d8c62ac3-afa3-4940-83b5-7f071a231367\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901041 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mww9c\" (UniqueName: \"kubernetes.io/projected/67af6ca9-8ecc-4615-8d7a-670914a7d5f5-kube-api-access-mww9c\") pod \"olm-operator-6b444d44fb-sqk29\" (UID: \"67af6ca9-8ecc-4615-8d7a-670914a7d5f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901092 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tgtk7\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901159 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95wxv\" (UniqueName: \"kubernetes.io/projected/f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f-kube-api-access-95wxv\") pod \"multus-admission-controller-857f4d67dd-dtq92\" (UID: \"f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901210 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/629226c0-c1d6-4d74-a041-7eb24832256f-default-certificate\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901261 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/725a6e19-5648-4f21-8405-1b6f29d6e9be-config-volume\") pod \"dns-default-xtsj2\" (UID: \"725a6e19-5648-4f21-8405-1b6f29d6e9be\") " pod="openshift-dns/dns-default-xtsj2" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901315 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc2zz\" (UniqueName: \"kubernetes.io/projected/d8c62ac3-afa3-4940-83b5-7f071a231367-kube-api-access-zc2zz\") pod \"openshift-controller-manager-operator-756b6f6bc6-ldz9n\" (UID: \"d8c62ac3-afa3-4940-83b5-7f071a231367\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901364 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3040cbe6-2783-43e2-9786-89fb91444b8e-etcd-service-ca\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901446 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-plugins-dir\") pod 
\"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901158 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-console-config\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901517 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-node-bootstrap-token\") pod \"machine-config-server-lpm4d\" (UID: \"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf\") " pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901697 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3040cbe6-2783-43e2-9786-89fb91444b8e-config\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901727 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3040cbe6-2783-43e2-9786-89fb91444b8e-etcd-client\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901857 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-mountpoint-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901879 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e8357b0-15fc-4b14-87b5-fdd058c316f3-proxy-tls\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.902036 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d-proxy-tls\") pod \"machine-config-controller-84d6567774-srz7l\" (UID: \"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.902825 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-oauth-config\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.902946 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-trusted-ca-bundle\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.903267 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-plugins-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.901939 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4dda8c36-2a02-4199-a2e3-33ae4a218883-registration-dir\") pod \"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.903487 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-srz7l\" (UID: \"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.904234 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e8357b0-15fc-4b14-87b5-fdd058c316f3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.904226 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df749bb-0f54-4f5b-b9b9-cf46babaf698-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bkwqf\" (UID: \"3df749bb-0f54-4f5b-b9b9-cf46babaf698\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.904530 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b29ce520-0853-4925-9974-165b2a41bcfa-tmpfs\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.904726 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e8357b0-15fc-4b14-87b5-fdd058c316f3-images\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.904855 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-service-ca\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.907611 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-serving-cert\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.908000 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/312ac034-6fc6-4ceb-bb05-56d80e07a205-metrics-tls\") pod \"dns-operator-744455d44c-8d5sp\" (UID: \"312ac034-6fc6-4ceb-bb05-56d80e07a205\") " pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.908570 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e8357b0-15fc-4b14-87b5-fdd058c316f3-proxy-tls\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.909004 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.909083 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.909110 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.909267 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526979fe-578d-44b2-b8af-b02c7d712f7a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8h78\" (UID: \"526979fe-578d-44b2-b8af-b02c7d712f7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.909348 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.914435 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.916866 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526979fe-578d-44b2-b8af-b02c7d712f7a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8h78\" (UID: \"526979fe-578d-44b2-b8af-b02c7d712f7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.935927 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.955088 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.968271 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d-proxy-tls\") pod \"machine-config-controller-84d6567774-srz7l\" (UID: \"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.975579 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 02 13:44:56 crc kubenswrapper[4900]: I1202 13:44:56.996954 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.003337 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3040cbe6-2783-43e2-9786-89fb91444b8e-serving-cert\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.003299 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.003450 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.503426907 +0000 UTC m=+142.919240758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.005412 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.006097 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.506064431 +0000 UTC m=+142.921878312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.014066 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.033824 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.044233 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3040cbe6-2783-43e2-9786-89fb91444b8e-config\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.053692 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.074549 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.088347 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3040cbe6-2783-43e2-9786-89fb91444b8e-etcd-client\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.093713 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.104266 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/3040cbe6-2783-43e2-9786-89fb91444b8e-etcd-ca\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.106127 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.106317 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.606277729 +0000 UTC m=+143.022091610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.107567 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.108175 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.608132061 +0000 UTC m=+143.023945922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.114398 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.134204 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.144305 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3040cbe6-2783-43e2-9786-89fb91444b8e-etcd-service-ca\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.153565 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.174495 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.193989 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.208330 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.208486 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.708461303 +0000 UTC m=+143.124275164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.209700 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.210065 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.710052888 +0000 UTC m=+143.125866749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.214304 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.227242 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/629226c0-c1d6-4d74-a041-7eb24832256f-default-certificate\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.233947 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.240498 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/629226c0-c1d6-4d74-a041-7eb24832256f-stats-auth\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.253979 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.263090 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/629226c0-c1d6-4d74-a041-7eb24832256f-metrics-certs\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.274564 4900 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.294035 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.310918 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.312265 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.812238361 +0000 UTC m=+143.228052242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.314499 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.324712 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/629226c0-c1d6-4d74-a041-7eb24832256f-service-ca-bundle\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.334282 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.342289 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3a2685-a14d-4ffe-8f76-55de65b5841b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qsd4r\" (UID: \"ca3a2685-a14d-4ffe-8f76-55de65b5841b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.354724 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.375129 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.383695 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca3a2685-a14d-4ffe-8f76-55de65b5841b-config\") pod \"kube-controller-manager-operator-78b949d7b-qsd4r\" (UID: \"ca3a2685-a14d-4ffe-8f76-55de65b5841b\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.395030 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.413986 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.414591 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.414821 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.914791645 +0000 UTC m=+143.330605536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.427181 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b725394f-0913-4f97-b61e-6906b21741be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zbsck\" (UID: \"b725394f-0913-4f97-b61e-6906b21741be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.434426 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.442903 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b725394f-0913-4f97-b61e-6906b21741be-config\") pod \"kube-apiserver-operator-766d6c64bb-zbsck\" (UID: \"b725394f-0913-4f97-b61e-6906b21741be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.454538 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.472286 4900 request.go:700] Waited for 1.018684469s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0 Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.474313 4900 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.493913 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.508121 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98bca191-5b30-464c-89ad-01df623a1728-srv-cert\") pod \"catalog-operator-68c6474976-7tvc9\" (UID: \"98bca191-5b30-464c-89ad-01df623a1728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.515001 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.515069 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.515401 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.015349974 +0000 UTC m=+143.431163865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.516497 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.517077 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.017045491 +0000 UTC m=+143.432859392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.520172 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98bca191-5b30-464c-89ad-01df623a1728-profile-collector-cert\") pod \"catalog-operator-68c6474976-7tvc9\" (UID: \"98bca191-5b30-464c-89ad-01df623a1728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.527340 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/535396d3-f3c8-4175-a498-526e02960674-secret-volume\") pod \"collect-profiles-29411370-bpgg9\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.529395 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/67af6ca9-8ecc-4615-8d7a-670914a7d5f5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sqk29\" (UID: \"67af6ca9-8ecc-4615-8d7a-670914a7d5f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.534237 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.580349 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.603259 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5ww\" (UniqueName: \"kubernetes.io/projected/6808a6ac-e5cb-44ae-a0a6-dfe555d727ac-kube-api-access-8g5ww\") pod \"ingress-operator-5b745b69d9-cjdbn\" (UID: \"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.615280 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.617451 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.617601 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.117560328 +0000 UTC m=+143.533374209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.618513 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.619089 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.11906855 +0000 UTC m=+143.534882441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.622938 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq6gn\" (UniqueName: \"kubernetes.io/projected/d34e4093-59b0-4aba-b254-5671e760b208-kube-api-access-jq6gn\") pod \"cluster-samples-operator-665b6dd947-5n647\" (UID: \"d34e4093-59b0-4aba-b254-5671e760b208\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.629469 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dtq92\" (UID: \"f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.633933 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.647888 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/67af6ca9-8ecc-4615-8d7a-670914a7d5f5-srv-cert\") pod \"olm-operator-6b444d44fb-sqk29\" (UID: \"67af6ca9-8ecc-4615-8d7a-670914a7d5f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.654679 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 02 13:44:57 crc 
kubenswrapper[4900]: I1202 13:44:57.698530 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.707627 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72j5s\" (UniqueName: \"kubernetes.io/projected/aaa314e8-a902-4ab4-85ad-550d03c8a91d-kube-api-access-72j5s\") pod \"machine-api-operator-5694c8668f-xcml8\" (UID: \"aaa314e8-a902-4ab4-85ad-550d03c8a91d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.712290 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.720183 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.720413 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.2203733 +0000 UTC m=+143.636187181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.721002 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.721515 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.221498901 +0000 UTC m=+143.637312792 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.721872 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv4p2\" (UniqueName: \"kubernetes.io/projected/a0d1a0ea-5032-423f-ac08-c236f60fea7f-kube-api-access-lv4p2\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.744617 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq5t7\" (UniqueName: \"kubernetes.io/projected/35913a77-057b-4aab-b923-97ce7871c010-kube-api-access-zq5t7\") pod \"console-operator-58897d9998-5mkrz\" (UID: \"35913a77-057b-4aab-b923-97ce7871c010\") " pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.760991 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8z6n\" (UniqueName: \"kubernetes.io/projected/52279fab-53ca-41cf-8370-bbc4821be6c2-kube-api-access-z8z6n\") pod \"apiserver-76f77b778f-4s6cc\" (UID: \"52279fab-53ca-41cf-8370-bbc4821be6c2\") " pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.775916 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.785934 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0d1a0ea-5032-423f-ac08-c236f60fea7f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7s7qs\" (UID: \"a0d1a0ea-5032-423f-ac08-c236f60fea7f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.794778 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.802061 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztqn8\" (UniqueName: \"kubernetes.io/projected/3bb063e9-c943-4b94-9196-80357b0fd832-kube-api-access-ztqn8\") pod \"machine-approver-56656f9798-5t42f\" (UID: \"3bb063e9-c943-4b94-9196-80357b0fd832\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.808767 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab3adbee-6d24-4396-a6a8-dfd4e5255627-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q5tmt\" (UID: \"ab3adbee-6d24-4396-a6a8-dfd4e5255627\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.814079 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.823141 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.823449 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.323406537 +0000 UTC m=+143.739220428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.824070 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.824784 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.324764815 +0000 UTC m=+143.740578696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.834178 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.844054 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b29ce520-0853-4925-9974-165b2a41bcfa-webhook-cert\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.846064 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b29ce520-0853-4925-9974-165b2a41bcfa-apiservice-cert\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.852925 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.856413 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.860259 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.881341 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.889298 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tgtk7\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.897748 4900 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.897805 4900 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.897837 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64482a60-f5ce-47c1-9389-3945ebe3087d-serving-cert podName:64482a60-f5ce-47c1-9389-3945ebe3087d nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.397810325 +0000 UTC m=+143.813624176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/64482a60-f5ce-47c1-9389-3945ebe3087d-serving-cert") pod "service-ca-operator-777779d784-wsrhj" (UID: "64482a60-f5ce-47c1-9389-3945ebe3087d") : failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.897939 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-certs podName:1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.397893607 +0000 UTC m=+143.813707668 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-certs") pod "machine-config-server-lpm4d" (UID: "1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf") : failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.898304 4900 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.898485 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/535396d3-f3c8-4175-a498-526e02960674-config-volume podName:535396d3-f3c8-4175-a498-526e02960674 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.398444103 +0000 UTC m=+143.814258144 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/535396d3-f3c8-4175-a498-526e02960674-config-volume") pod "collect-profiles-29411370-bpgg9" (UID: "535396d3-f3c8-4175-a498-526e02960674") : failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.898311 4900 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.898620 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d8c62ac3-afa3-4940-83b5-7f071a231367-config podName:d8c62ac3-afa3-4940-83b5-7f071a231367 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.398605517 +0000 UTC m=+143.814419678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d8c62ac3-afa3-4940-83b5-7f071a231367-config") pod "openshift-controller-manager-operator-756b6f6bc6-ldz9n" (UID: "d8c62ac3-afa3-4940-83b5-7f071a231367") : failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.899554 4900 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.899716 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-key podName:4e4344cf-f5b8-49d4-91e1-9726ea4e6197 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.399634396 +0000 UTC m=+143.815448257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-key") pod "service-ca-9c57cc56f-mrs5b" (UID: "4e4344cf-f5b8-49d4-91e1-9726ea4e6197") : failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.899797 4900 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.899832 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/64482a60-f5ce-47c1-9389-3945ebe3087d-config podName:64482a60-f5ce-47c1-9389-3945ebe3087d nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.399822971 +0000 UTC m=+143.815636832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/64482a60-f5ce-47c1-9389-3945ebe3087d-config") pod "service-ca-operator-777779d784-wsrhj" (UID: "64482a60-f5ce-47c1-9389-3945ebe3087d") : failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.902878 4900 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.902932 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-node-bootstrap-token podName:1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf nodeName:}" failed. 
No retries permitted until 2025-12-02 13:44:58.402916108 +0000 UTC m=+143.818730179 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-node-bootstrap-token") pod "machine-config-server-lpm4d" (UID: "1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf") : failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903245 4900 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903287 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/725a6e19-5648-4f21-8405-1b6f29d6e9be-config-volume podName:725a6e19-5648-4f21-8405-1b6f29d6e9be nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.403275098 +0000 UTC m=+143.819089189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/725a6e19-5648-4f21-8405-1b6f29d6e9be-config-volume") pod "dns-default-xtsj2" (UID: "725a6e19-5648-4f21-8405-1b6f29d6e9be") : failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903315 4900 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903341 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87d7155d-17a2-4191-8a85-9b6277641b28-cert podName:87d7155d-17a2-4191-8a85-9b6277641b28 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.403334059 +0000 UTC m=+143.819148160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87d7155d-17a2-4191-8a85-9b6277641b28-cert") pod "ingress-canary-7cgf2" (UID: "87d7155d-17a2-4191-8a85-9b6277641b28") : failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903369 4900 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903396 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/725a6e19-5648-4f21-8405-1b6f29d6e9be-metrics-tls podName:725a6e19-5648-4f21-8405-1b6f29d6e9be nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.403387891 +0000 UTC m=+143.819201982 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/725a6e19-5648-4f21-8405-1b6f29d6e9be-metrics-tls") pod "dns-default-xtsj2" (UID: "725a6e19-5648-4f21-8405-1b6f29d6e9be") : failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903436 4900 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903466 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-cabundle podName:4e4344cf-f5b8-49d4-91e1-9726ea4e6197 nodeName:}" failed. 
No retries permitted until 2025-12-02 13:44:58.403457583 +0000 UTC m=+143.819271684 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-cabundle") pod "service-ca-9c57cc56f-mrs5b" (UID: "4e4344cf-f5b8-49d4-91e1-9726ea4e6197") : failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903519 4900 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903552 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-trusted-ca podName:de46dffd-919a-4df1-9d52-cbf1d14b8205 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.403541455 +0000 UTC m=+143.819355546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-trusted-ca") pod "marketplace-operator-79b997595-tgtk7" (UID: "de46dffd-919a-4df1-9d52-cbf1d14b8205") : failed to sync configmap cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903592 4900 secret.go:188] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.903729 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8c62ac3-afa3-4940-83b5-7f071a231367-serving-cert podName:d8c62ac3-afa3-4940-83b5-7f071a231367 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.40370236 +0000 UTC m=+143.819516411 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d8c62ac3-afa3-4940-83b5-7f071a231367-serving-cert") pod "openshift-controller-manager-operator-756b6f6bc6-ldz9n" (UID: "d8c62ac3-afa3-4940-83b5-7f071a231367") : failed to sync secret cache: timed out waiting for the condition Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.907343 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.916333 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.926479 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.926727 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.426693842 +0000 UTC m=+143.842507703 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.927023 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:57 crc kubenswrapper[4900]: E1202 13:44:57.927796 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.427775912 +0000 UTC m=+143.843589773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.934343 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.947992 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.953810 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.973694 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.987218 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647"] Dec 02 13:44:57 crc kubenswrapper[4900]: I1202 13:44:57.995109 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.011260 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn"] Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.015955 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.021975 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.028961 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.029483 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.529428131 +0000 UTC m=+143.945241992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.029730 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.030182 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.530161771 +0000 UTC m=+143.945975622 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.034706 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 13:44:58 crc kubenswrapper[4900]: W1202 13:44:58.038363 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6808a6ac_e5cb_44ae_a0a6_dfe555d727ac.slice/crio-a9219162c9a23fe406be5ad3bfe60a4ed7f4708fd4b0da6c598e1df44172b3d5 WatchSource:0}: Error finding container a9219162c9a23fe406be5ad3bfe60a4ed7f4708fd4b0da6c598e1df44172b3d5: Status 404 returned error can't find the container with id a9219162c9a23fe406be5ad3bfe60a4ed7f4708fd4b0da6c598e1df44172b3d5 Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.054326 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.076878 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.079389 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xcml8"] Dec 02 13:44:58 crc kubenswrapper[4900]: W1202 13:44:58.088005 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaa314e8_a902_4ab4_85ad_550d03c8a91d.slice/crio-444141ee45d176671c05c8e24a886951102699f406a3df4524500adde6fb05cf WatchSource:0}: Error finding container 444141ee45d176671c05c8e24a886951102699f406a3df4524500adde6fb05cf: Status 404 returned error can't find the container with id 444141ee45d176671c05c8e24a886951102699f406a3df4524500adde6fb05cf Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.097307 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.113325 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.134911 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.135618 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.136461 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 13:44:58.636438879 +0000 UTC m=+144.052252720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.137398 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.137936 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.637913581 +0000 UTC m=+144.053727422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.142199 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs"] Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.154891 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 13:44:58 crc kubenswrapper[4900]: W1202 13:44:58.168909 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0d1a0ea_5032_423f_ac08_c236f60fea7f.slice/crio-e678e2dbe57b5d67b92f9ddac9b823d50a25ba337ca65602f9a2e5aa60fa1b5b WatchSource:0}: Error finding container e678e2dbe57b5d67b92f9ddac9b823d50a25ba337ca65602f9a2e5aa60fa1b5b: Status 404 returned error can't find the container with id e678e2dbe57b5d67b92f9ddac9b823d50a25ba337ca65602f9a2e5aa60fa1b5b Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.174249 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.202894 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.212182 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5mkrz"] Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.213605 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 02 13:44:58 crc 
kubenswrapper[4900]: I1202 13:44:58.235565 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.238660 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.238853 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.738819539 +0000 UTC m=+144.154633390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.239324 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.239861 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.739837747 +0000 UTC m=+144.155651788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.255198 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.275710 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.294323 4900 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.310808 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4s6cc"] Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.314414 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.335627 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.341290 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.341491 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.841448525 +0000 UTC m=+144.257262376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.341911 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.342347 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-02 13:44:58.842324759 +0000 UTC m=+144.258138610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.353263 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.375377 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.394171 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.414078 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.434805 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443053 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443315 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64482a60-f5ce-47c1-9389-3945ebe3087d-config\") pod \"service-ca-operator-777779d784-wsrhj\" (UID: \"64482a60-f5ce-47c1-9389-3945ebe3087d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443371 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d7155d-17a2-4191-8a85-9b6277641b28-cert\") pod \"ingress-canary-7cgf2\" (UID: \"87d7155d-17a2-4191-8a85-9b6277641b28\") " pod="openshift-ingress-canary/ingress-canary-7cgf2" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443455 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/725a6e19-5648-4f21-8405-1b6f29d6e9be-metrics-tls\") pod \"dns-default-xtsj2\" (UID: \"725a6e19-5648-4f21-8405-1b6f29d6e9be\") " pod="openshift-dns/dns-default-xtsj2" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443485 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-cabundle\") pod \"service-ca-9c57cc56f-mrs5b\" (UID: \"4e4344cf-f5b8-49d4-91e1-9726ea4e6197\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443564 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d8c62ac3-afa3-4940-83b5-7f071a231367-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ldz9n\" (UID: \"d8c62ac3-afa3-4940-83b5-7f071a231367\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443592 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tgtk7\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443615 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/725a6e19-5648-4f21-8405-1b6f29d6e9be-config-volume\") pod \"dns-default-xtsj2\" (UID: \"725a6e19-5648-4f21-8405-1b6f29d6e9be\") " pod="openshift-dns/dns-default-xtsj2" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443660 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-node-bootstrap-token\") pod \"machine-config-server-lpm4d\" (UID: \"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf\") " pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443684 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-key\") pod \"service-ca-9c57cc56f-mrs5b\" (UID: \"4e4344cf-f5b8-49d4-91e1-9726ea4e6197\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443719 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/535396d3-f3c8-4175-a498-526e02960674-config-volume\") pod \"collect-profiles-29411370-bpgg9\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443749 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-certs\") pod \"machine-config-server-lpm4d\" (UID: \"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf\") " pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443797 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64482a60-f5ce-47c1-9389-3945ebe3087d-serving-cert\") pod \"service-ca-operator-777779d784-wsrhj\" (UID: \"64482a60-f5ce-47c1-9389-3945ebe3087d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.443900 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c62ac3-afa3-4940-83b5-7f071a231367-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ldz9n\" (UID: \"d8c62ac3-afa3-4940-83b5-7f071a231367\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.444876 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:58.944859313 +0000 UTC m=+144.360673164 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.445412 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64482a60-f5ce-47c1-9389-3945ebe3087d-config\") pod \"service-ca-operator-777779d784-wsrhj\" (UID: \"64482a60-f5ce-47c1-9389-3945ebe3087d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.446725 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/725a6e19-5648-4f21-8405-1b6f29d6e9be-config-volume\") pod \"dns-default-xtsj2\" (UID: \"725a6e19-5648-4f21-8405-1b6f29d6e9be\") " pod="openshift-dns/dns-default-xtsj2" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.448011 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c62ac3-afa3-4940-83b5-7f071a231367-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ldz9n\" (UID: \"d8c62ac3-afa3-4940-83b5-7f071a231367\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.448018 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-cabundle\") pod \"service-ca-9c57cc56f-mrs5b\" (UID: \"4e4344cf-f5b8-49d4-91e1-9726ea4e6197\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.448626 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/535396d3-f3c8-4175-a498-526e02960674-config-volume\") pod \"collect-profiles-29411370-bpgg9\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.452681 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tgtk7\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.453770 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-signing-key\") pod \"service-ca-9c57cc56f-mrs5b\" (UID: \"4e4344cf-f5b8-49d4-91e1-9726ea4e6197\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.453969 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.455885 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64482a60-f5ce-47c1-9389-3945ebe3087d-serving-cert\") pod \"service-ca-operator-777779d784-wsrhj\" (UID: \"64482a60-f5ce-47c1-9389-3945ebe3087d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.456975 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d7155d-17a2-4191-8a85-9b6277641b28-cert\") pod \"ingress-canary-7cgf2\" (UID: \"87d7155d-17a2-4191-8a85-9b6277641b28\") " pod="openshift-ingress-canary/ingress-canary-7cgf2" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.457003 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c62ac3-afa3-4940-83b5-7f071a231367-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ldz9n\" (UID: \"d8c62ac3-afa3-4940-83b5-7f071a231367\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.460594 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/725a6e19-5648-4f21-8405-1b6f29d6e9be-metrics-tls\") pod \"dns-default-xtsj2\" (UID: \"725a6e19-5648-4f21-8405-1b6f29d6e9be\") " pod="openshift-dns/dns-default-xtsj2" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.472621 4900 request.go:700] Waited for 1.938130487s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.475311 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.485661 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-certs\") pod \"machine-config-server-lpm4d\" (UID: \"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf\") " pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.493428 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.501501 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-node-bootstrap-token\") pod \"machine-config-server-lpm4d\" (UID: \"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf\") " pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 
13:44:58.517204 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.545010 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.545748 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:59.04572538 +0000 UTC m=+144.461539231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.567921 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqnsp\" (UniqueName: \"kubernetes.io/projected/eb44d009-5920-4606-aba3-aaf7104b1a22-kube-api-access-bqnsp\") pod \"oauth-openshift-558db77b4-dbc7k\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.577318 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-bound-sa-token\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.598725 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtlwm\" (UniqueName: \"kubernetes.io/projected/c782003f-e8d3-4aa5-aba6-0db2706d4e43-kube-api-access-xtlwm\") pod \"openshift-config-operator-7777fb866f-87drk\" (UID: \"c782003f-e8d3-4aa5-aba6-0db2706d4e43\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.604651 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.612777 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlpc9\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-kube-api-access-tlpc9\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.629571 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdsk5\" (UniqueName: \"kubernetes.io/projected/e5276c97-9e84-4632-98dc-43d3d4c1fefd-kube-api-access-mdsk5\") pod \"openshift-apiserver-operator-796bbdcf4f-6ch9f\" (UID: \"e5276c97-9e84-4632-98dc-43d3d4c1fefd\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.646004 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.646431 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:59.146408832 +0000 UTC m=+144.562222673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.647531 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcc7c\" (UniqueName: \"kubernetes.io/projected/0bfc0222-1bd6-4891-a1d5-2c6e53bd7592-kube-api-access-vcc7c\") pod \"authentication-operator-69f744f599-rl4bn\" (UID: \"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.672658 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8v22\" (UniqueName: \"kubernetes.io/projected/65ea3056-b990-4a94-a5aa-56a2a0f24b92-kube-api-access-j8v22\") pod \"route-controller-manager-6576b87f9c-8qpd6\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.700562 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m46fb\" (UniqueName: \"kubernetes.io/projected/0afb9a31-c9fa-465a-9b2e-856ec706f5aa-kube-api-access-m46fb\") pod \"apiserver-7bbb656c7d-xms5s\" (UID: \"0afb9a31-c9fa-465a-9b2e-856ec706f5aa\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.711677 
4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.713381 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mxsd\" (UniqueName: \"kubernetes.io/projected/a72dd04d-bb06-4b3a-9f08-d68072239bd8-kube-api-access-5mxsd\") pod \"controller-manager-879f6c89f-mq2gm\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.730134 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.748567 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.749052 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:59.249034808 +0000 UTC m=+144.664848659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.754333 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2khs\" (UniqueName: \"kubernetes.io/projected/64482a60-f5ce-47c1-9389-3945ebe3087d-kube-api-access-v2khs\") pod \"service-ca-operator-777779d784-wsrhj\" (UID: \"64482a60-f5ce-47c1-9389-3945ebe3087d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.766752 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svzvc\" (UniqueName: \"kubernetes.io/projected/629226c0-c1d6-4d74-a041-7eb24832256f-kube-api-access-svzvc\") pod \"router-default-5444994796-5z2g6\" (UID: \"629226c0-c1d6-4d74-a041-7eb24832256f\") " pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.788834 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jpj9\" (UniqueName: \"kubernetes.io/projected/98bca191-5b30-464c-89ad-01df623a1728-kube-api-access-8jpj9\") pod \"catalog-operator-68c6474976-7tvc9\" (UID: \"98bca191-5b30-464c-89ad-01df623a1728\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.792090 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.795424 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.808680 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dbc7k"] Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.813385 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdzpp\" (UniqueName: \"kubernetes.io/projected/87d7155d-17a2-4191-8a85-9b6277641b28-kube-api-access-qdzpp\") pod \"ingress-canary-7cgf2\" (UID: \"87d7155d-17a2-4191-8a85-9b6277641b28\") " pod="openshift-ingress-canary/ingress-canary-7cgf2" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.814449 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.823826 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7cgf2" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.834235 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rlmv\" (UniqueName: \"kubernetes.io/projected/1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf-kube-api-access-9rlmv\") pod \"machine-config-server-lpm4d\" (UID: \"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf\") " pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.837780 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lpm4d" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.849325 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.849421 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:59.349395351 +0000 UTC m=+144.765209202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.849541 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" event={"ID":"aaa314e8-a902-4ab4-85ad-550d03c8a91d","Type":"ContainerStarted","Data":"8fc2c0ec627705e013e7b8a52af97a12ac6a9d14f3a9b1f47305908739205134"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.849563 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.849583 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" event={"ID":"aaa314e8-a902-4ab4-85ad-550d03c8a91d","Type":"ContainerStarted","Data":"0d6cb0f8a3c11b3b0772124d10e12f893e209d60cf85d3025348c9ebc00021f8"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.849594 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" event={"ID":"aaa314e8-a902-4ab4-85ad-550d03c8a91d","Type":"ContainerStarted","Data":"444141ee45d176671c05c8e24a886951102699f406a3df4524500adde6fb05cf"} Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.850116 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:59.350100781 +0000 UTC m=+144.765914632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.856582 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rwrg\" (UniqueName: \"kubernetes.io/projected/4e4344cf-f5b8-49d4-91e1-9726ea4e6197-kube-api-access-5rwrg\") pod \"service-ca-9c57cc56f-mrs5b\" (UID: \"4e4344cf-f5b8-49d4-91e1-9726ea4e6197\") " pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.864570 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.868131 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7k6w\" (UniqueName: \"kubernetes.io/projected/526979fe-578d-44b2-b8af-b02c7d712f7a-kube-api-access-c7k6w\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8h78\" (UID: \"526979fe-578d-44b2-b8af-b02c7d712f7a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.896359 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" event={"ID":"3bb063e9-c943-4b94-9196-80357b0fd832","Type":"ContainerStarted","Data":"0be70fc84ed02842d480a8bade348962bb14c0a2df0c01d75372da9034ccf056"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.896427 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" event={"ID":"3bb063e9-c943-4b94-9196-80357b0fd832","Type":"ContainerStarted","Data":"41d6d7971e195954d48823eb99025e8a312b843724caa405e77d0eb47181ad98"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.896442 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" event={"ID":"3bb063e9-c943-4b94-9196-80357b0fd832","Type":"ContainerStarted","Data":"bfdb89691d8a82cb348a588eb5fbec9b23a895d00473dc6df4fa600734c6dd42"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.900540 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" event={"ID":"eb44d009-5920-4606-aba3-aaf7104b1a22","Type":"ContainerStarted","Data":"c820e8f56dc30dfb25925b657ef8e62f670261e15215f14f989a378920312461"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.905781 4900 generic.go:334] "Generic (PLEG): container finished" podID="52279fab-53ca-41cf-8370-bbc4821be6c2" containerID="17ceca1479af846dc0a957b0b071f5e8b27b704d9eab21a877c0aeacde066c4f" exitCode=0 Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.905852 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" event={"ID":"52279fab-53ca-41cf-8370-bbc4821be6c2","Type":"ContainerDied","Data":"17ceca1479af846dc0a957b0b071f5e8b27b704d9eab21a877c0aeacde066c4f"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.907135 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdzrw\" (UniqueName: \"kubernetes.io/projected/312ac034-6fc6-4ceb-bb05-56d80e07a205-kube-api-access-qdzrw\") pod \"dns-operator-744455d44c-8d5sp\" (UID: \"312ac034-6fc6-4ceb-bb05-56d80e07a205\") " pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.907605 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" event={"ID":"52279fab-53ca-41cf-8370-bbc4821be6c2","Type":"ContainerStarted","Data":"9efd1dd84790541591e1fb861e5c62c527e1053e7195f0d4a9a103659282fe85"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.918805 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sq8l\" (UniqueName: \"kubernetes.io/projected/4dda8c36-2a02-4199-a2e3-33ae4a218883-kube-api-access-6sq8l\") pod 
\"csi-hostpathplugin-77dgs\" (UID: \"4dda8c36-2a02-4199-a2e3-33ae4a218883\") " pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.931213 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rghqv\" (UniqueName: \"kubernetes.io/projected/b29ce520-0853-4925-9974-165b2a41bcfa-kube-api-access-rghqv\") pod \"packageserver-d55dfcdfc-tf9z8\" (UID: \"b29ce520-0853-4925-9974-165b2a41bcfa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:58 crc kubenswrapper[4900]: W1202 13:44:58.933730 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bfe5759_de2d_4f6e_a1fb_f5b659c1d9cf.slice/crio-7b03acbeee225fc284f6aa246d133aa02408d17993bd3d59cdaaea6c43390ecc WatchSource:0}: Error finding container 7b03acbeee225fc284f6aa246d133aa02408d17993bd3d59cdaaea6c43390ecc: Status 404 returned error can't find the container with id 7b03acbeee225fc284f6aa246d133aa02408d17993bd3d59cdaaea6c43390ecc Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.939325 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.948274 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b725394f-0913-4f97-b61e-6906b21741be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zbsck\" (UID: \"b725394f-0913-4f97-b61e-6906b21741be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.950576 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:58 crc kubenswrapper[4900]: E1202 13:44:58.953877 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:59.453845138 +0000 UTC m=+144.869658989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.963016 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" event={"ID":"a0d1a0ea-5032-423f-ac08-c236f60fea7f","Type":"ContainerStarted","Data":"9721610b38005abac176455f4e4e9bbde640ea4a56cea8a4cce2cbf05fad7880"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.963147 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.963177 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" event={"ID":"a0d1a0ea-5032-423f-ac08-c236f60fea7f","Type":"ContainerStarted","Data":"e678e2dbe57b5d67b92f9ddac9b823d50a25ba337ca65602f9a2e5aa60fa1b5b"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.963204 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5mkrz" event={"ID":"35913a77-057b-4aab-b923-97ce7871c010","Type":"ContainerStarted","Data":"e43f62c95098412a6c461f660c8a0c2029b9c587f4586daf7e632fe555685124"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.963227 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5mkrz" event={"ID":"35913a77-057b-4aab-b923-97ce7871c010","Type":"ContainerStarted","Data":"4fff4154cf6c89afc73f624548565f00bc95582e55fff7397c1de13b31dd7f5f"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.963237 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" event={"ID":"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac","Type":"ContainerStarted","Data":"6564a7c151f4ce325b234ce16cf2d026345a7efeb5f980e4d4e579c1bb02a894"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.963248 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" event={"ID":"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac","Type":"ContainerStarted","Data":"2e922ae9bb4480d2c80b161ae5359da73548e8deaef74071980bf4d5efe20ea9"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.963260 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" event={"ID":"6808a6ac-e5cb-44ae-a0a6-dfe555d727ac","Type":"ContainerStarted","Data":"a9219162c9a23fe406be5ad3bfe60a4ed7f4708fd4b0da6c598e1df44172b3d5"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.963270 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" event={"ID":"d34e4093-59b0-4aba-b254-5671e760b208","Type":"ContainerStarted","Data":"453d8aa276f9244fd8c1570769d7bdbf67cddfc85bdf2585d8a00b1cdf75e641"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.963282 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" event={"ID":"d34e4093-59b0-4aba-b254-5671e760b208","Type":"ContainerStarted","Data":"8bd78e39a81d865f88f5cb587bf96f55e99c0f35989ee6454f4f9ad3a8ab29a6"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.963293 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" event={"ID":"d34e4093-59b0-4aba-b254-5671e760b208","Type":"ContainerStarted","Data":"6e03b6df6633d71f5bfd026e000280a09875ebee412fe1e856ff67895f297616"} Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.987093 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.995757 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:44:58 crc kubenswrapper[4900]: I1202 13:44:58.996789 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqrft\" (UniqueName: \"kubernetes.io/projected/c9bd8a2c-57b4-40b4-b931-16496b5236a0-kube-api-access-fqrft\") pod \"control-plane-machine-set-operator-78cbb6b69f-n7qvb\" (UID: \"c9bd8a2c-57b4-40b4-b931-16496b5236a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.019242 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca3a2685-a14d-4ffe-8f76-55de65b5841b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qsd4r\" (UID: \"ca3a2685-a14d-4ffe-8f76-55de65b5841b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.020168 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.025655 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.034600 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.040197 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbx2g\" (UniqueName: \"kubernetes.io/projected/55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d-kube-api-access-lbx2g\") pod \"machine-config-controller-84d6567774-srz7l\" (UID: \"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.050888 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkhkh\" (UniqueName: \"kubernetes.io/projected/ab3adbee-6d24-4396-a6a8-dfd4e5255627-kube-api-access-dkhkh\") pod \"package-server-manager-789f6589d5-q5tmt\" (UID: \"ab3adbee-6d24-4396-a6a8-dfd4e5255627\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.054357 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.058220 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:59.558195022 +0000 UTC m=+144.974008873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.058542 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3df749bb-0f54-4f5b-b9b9-cf46babaf698-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bkwqf\" (UID: \"3df749bb-0f54-4f5b-b9b9-cf46babaf698\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.060223 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.077422 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.080609 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js2p5\" (UniqueName: \"kubernetes.io/projected/725a6e19-5648-4f21-8405-1b6f29d6e9be-kube-api-access-js2p5\") pod \"dns-default-xtsj2\" (UID: \"725a6e19-5648-4f21-8405-1b6f29d6e9be\") " pod="openshift-dns/dns-default-xtsj2" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.096395 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn7gg\" (UniqueName: \"kubernetes.io/projected/de46dffd-919a-4df1-9d52-cbf1d14b8205-kube-api-access-jn7gg\") pod \"marketplace-operator-79b997595-tgtk7\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.096581 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.116658 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-77dgs" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.131421 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xtsj2" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.131495 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc8qv\" (UniqueName: \"kubernetes.io/projected/95616fe1-4979-433d-afce-3235d5dab8a5-kube-api-access-vc8qv\") pod \"console-f9d7485db-vrdh8\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.134672 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-5mkrz" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.149496 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc2zz\" (UniqueName: \"kubernetes.io/projected/d8c62ac3-afa3-4940-83b5-7f071a231367-kube-api-access-zc2zz\") pod \"openshift-controller-manager-operator-756b6f6bc6-ldz9n\" (UID: \"d8c62ac3-afa3-4940-83b5-7f071a231367\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.159221 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.159829 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:59.6598033 +0000 UTC m=+145.075617151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.162740 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc7k7\" (UniqueName: \"kubernetes.io/projected/535396d3-f3c8-4175-a498-526e02960674-kube-api-access-gc7k7\") pod \"collect-profiles-29411370-bpgg9\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.172542 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.179518 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpc7q\" (UniqueName: \"kubernetes.io/projected/c6ae55d7-1773-484c-9657-a6438f072dee-kube-api-access-cpc7q\") pod \"downloads-7954f5f757-5wqjj\" (UID: \"c6ae55d7-1773-484c-9657-a6438f072dee\") " pod="openshift-console/downloads-7954f5f757-5wqjj" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.191134 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mq2gm"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.199406 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.200218 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t9nf\" (UniqueName: \"kubernetes.io/projected/0e8357b0-15fc-4b14-87b5-fdd058c316f3-kube-api-access-5t9nf\") pod \"machine-config-operator-74547568cd-skddd\" (UID: \"0e8357b0-15fc-4b14-87b5-fdd058c316f3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.228855 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84m4c\" (UniqueName: \"kubernetes.io/projected/b3edd4c5-0d86-4f99-bb00-e9b134cda502-kube-api-access-84m4c\") pod \"migrator-59844c95c7-6pj4t\" (UID: \"b3edd4c5-0d86-4f99-bb00-e9b134cda502\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.235033 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-789g6\" (UniqueName: \"kubernetes.io/projected/3040cbe6-2783-43e2-9786-89fb91444b8e-kube-api-access-789g6\") pod \"etcd-operator-b45778765-kd2ql\" (UID: \"3040cbe6-2783-43e2-9786-89fb91444b8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.244554 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5wqjj" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.254571 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.258535 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.261886 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.262256 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:59.762239491 +0000 UTC m=+145.178053342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.265522 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.272338 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mww9c\" (UniqueName: \"kubernetes.io/projected/67af6ca9-8ecc-4615-8d7a-670914a7d5f5-kube-api-access-mww9c\") pod \"olm-operator-6b444d44fb-sqk29\" (UID: \"67af6ca9-8ecc-4615-8d7a-670914a7d5f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.274071 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.274489 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.275354 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7cgf2"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.277150 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95wxv\" (UniqueName: \"kubernetes.io/projected/f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f-kube-api-access-95wxv\") pod \"multus-admission-controller-857f4d67dd-dtq92\" (UID: \"f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.279336 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" Dec 02 13:44:59 crc kubenswrapper[4900]: W1202 13:44:59.286883 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64482a60_f5ce_47c1_9389_3945ebe3087d.slice/crio-9896527bb0d9e565883cddeee17245f02a8806ba0296094a366c01c778ce46a5 WatchSource:0}: Error finding container 9896527bb0d9e565883cddeee17245f02a8806ba0296094a366c01c778ce46a5: Status 404 returned error can't find the container with id 9896527bb0d9e565883cddeee17245f02a8806ba0296094a366c01c778ce46a5 Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.295160 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.299038 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.299276 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.304609 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.314486 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.335016 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.336920 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.338988 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.344950 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.356603 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.359805 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.364437 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.365049 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:59.865026661 +0000 UTC m=+145.280840512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.370929 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.376929 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.384074 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.465942 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.466772 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:44:59.966755053 +0000 UTC m=+145.382568904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.553268 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8d5sp"] Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.568530 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.068493904 +0000 UTC m=+145.484307755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.568298 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.570187 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.570627 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.070616683 +0000 UTC m=+145.486430524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.589494 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rl4bn"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.605069 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.671775 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.672139 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.172116168 +0000 UTC m=+145.587930019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.701830 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-87drk"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.710910 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.774675 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.775220 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.275193817 +0000 UTC m=+145.691007668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.876304 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.876513 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.376468845 +0000 UTC m=+145.792282696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.876864 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.877210 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.377194485 +0000 UTC m=+145.793008336 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.888271 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-77dgs"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.895073 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.901979 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.912733 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.926763 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xtsj2"] Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.944451 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" event={"ID":"e5276c97-9e84-4632-98dc-43d3d4c1fefd","Type":"ContainerStarted","Data":"6d850574461fa27ae98d9a456ef124f6bcf01eb1fafc4e094a9b1d13d8ce0950"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.946625 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" event={"ID":"526979fe-578d-44b2-b8af-b02c7d712f7a","Type":"ContainerStarted","Data":"d235fc1a13833c600c5c7928ebc4e71275b6c6f82952bbc5403f8d5c4f79472e"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.948399 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" event={"ID":"a72dd04d-bb06-4b3a-9f08-d68072239bd8","Type":"ContainerStarted","Data":"e292cf1fbe3da66874b438ca007155905a77917a333e39c8b3a0eab3956f34b3"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.953145 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" event={"ID":"52279fab-53ca-41cf-8370-bbc4821be6c2","Type":"ContainerStarted","Data":"452c9532fb0c51f10219a193dd196bb62a9e8b968e89c28c8c827110c3e89d57"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.959633 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lpm4d" event={"ID":"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf","Type":"ContainerStarted","Data":"610cc035d3ab1de28e50829f22a64d9698a27b39e90118621072b17a4d339a99"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.959699 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lpm4d" event={"ID":"1bfe5759-de2d-4f6e-a1fb-f5b659c1d9cf","Type":"ContainerStarted","Data":"7b03acbeee225fc284f6aa246d133aa02408d17993bd3d59cdaaea6c43390ecc"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.961412 4900 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" event={"ID":"65ea3056-b990-4a94-a5aa-56a2a0f24b92","Type":"ContainerStarted","Data":"2ed645e8285949297b56ce5a825c42c6f6125f5a4c284065f853ce3a6c2834bb"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.962444 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" event={"ID":"c782003f-e8d3-4aa5-aba6-0db2706d4e43","Type":"ContainerStarted","Data":"68b92652723c9805b58fad15ce079fb142450f2b9e8be1545ae131077624712a"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.970926 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7cgf2" event={"ID":"87d7155d-17a2-4191-8a85-9b6277641b28","Type":"ContainerStarted","Data":"97cf6ba7af21de91f0a661b26147129db44a2cd2e88e59c39738c0cdcedc291f"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.976382 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" event={"ID":"64482a60-f5ce-47c1-9389-3945ebe3087d","Type":"ContainerStarted","Data":"9896527bb0d9e565883cddeee17245f02a8806ba0296094a366c01c778ce46a5"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.977401 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.977504 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.477478826 +0000 UTC m=+145.893292677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.977797 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:44:59 crc kubenswrapper[4900]: E1202 13:44:59.978188 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.478177355 +0000 UTC m=+145.893991206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.981816 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" event={"ID":"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592","Type":"ContainerStarted","Data":"cf6f8fe196e31bad5ce924f35457cdfb6d826419fe32ce8b0990c1caffda885b"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.994732 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" event={"ID":"eb44d009-5920-4606-aba3-aaf7104b1a22","Type":"ContainerStarted","Data":"29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9"} Dec 02 13:44:59 crc kubenswrapper[4900]: I1202 13:44:59.995790 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:44:59.999981 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" event={"ID":"312ac034-6fc6-4ceb-bb05-56d80e07a205","Type":"ContainerStarted","Data":"e16000c0b3bde8fadee7ab04fd6dd4e2a79d2d5a3f922d29b22612945ef324a4"} Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.002081 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5z2g6" event={"ID":"629226c0-c1d6-4d74-a041-7eb24832256f","Type":"ContainerStarted","Data":"de48bd0bae4779d99a743c37f76f2dae937b4eca4619d8792c64dbf334e15ce8"} Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.002118 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5z2g6" event={"ID":"629226c0-c1d6-4d74-a041-7eb24832256f","Type":"ContainerStarted","Data":"87599c7d85030b9989cc023217b2ee24049edc3d590dba55685256547b62a991"} Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.005565 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" event={"ID":"0afb9a31-c9fa-465a-9b2e-856ec706f5aa","Type":"ContainerStarted","Data":"709b0c010275c286b2416d16f80f1c3a7c15b920771d049917c885a817ebf207"} Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.023096 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.031202 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:00 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:00 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:00 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.031263 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" 
podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.053521 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mrs5b"] Dec 02 13:45:00 crc kubenswrapper[4900]: W1202 13:45:00.053616 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod725a6e19_5648_4f21_8405_1b6f29d6e9be.slice/crio-0a9977e3585350c9feff09452b49eeb79b67318ea352b95a3cb82f5733079565 WatchSource:0}: Error finding container 0a9977e3585350c9feff09452b49eeb79b67318ea352b95a3cb82f5733079565: Status 404 returned error can't find the container with id 0a9977e3585350c9feff09452b49eeb79b67318ea352b95a3cb82f5733079565 Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.079138 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:00 crc kubenswrapper[4900]: E1202 13:45:00.090783 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.59075448 +0000 UTC m=+146.006568331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.091830 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-5mkrz" podStartSLOduration=124.091801659 podStartE2EDuration="2m4.091801659s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:00.091072478 +0000 UTC m=+145.506886329" watchObservedRunningTime="2025-12-02 13:45:00.091801659 +0000 UTC m=+145.507615510" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.100821 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vrdh8"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.131242 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5wqjj"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.132975 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.161510 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.163273 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.178518 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.219538 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-secret-volume\") pod \"collect-profiles-29411385-l4l7p\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.220469 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-config-volume\") pod \"collect-profiles-29411385-l4l7p\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.220535 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7wvz\" (UniqueName: \"kubernetes.io/projected/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-kube-api-access-m7wvz\") pod \"collect-profiles-29411385-l4l7p\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.220632 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:00 crc kubenswrapper[4900]: E1202 13:45:00.224184 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.724163984 +0000 UTC m=+146.139977835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.321807 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:00 crc kubenswrapper[4900]: E1202 13:45:00.322863 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.82283483 +0000 UTC m=+146.238648681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.323100 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-secret-volume\") pod \"collect-profiles-29411385-l4l7p\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.323151 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-config-volume\") pod \"collect-profiles-29411385-l4l7p\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.323179 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7wvz\" (UniqueName: \"kubernetes.io/projected/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-kube-api-access-m7wvz\") pod \"collect-profiles-29411385-l4l7p\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.323236 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:00 crc kubenswrapper[4900]: E1202 13:45:00.323613 4900 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.823606492 +0000 UTC m=+146.239420343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.325989 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-config-volume\") pod \"collect-profiles-29411385-l4l7p\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.359362 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-secret-volume\") pod \"collect-profiles-29411385-l4l7p\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.376970 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7wvz\" (UniqueName: \"kubernetes.io/projected/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-kube-api-access-m7wvz\") pod \"collect-profiles-29411385-l4l7p\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.428576 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:00 crc kubenswrapper[4900]: E1202 13:45:00.429360 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:00.929337404 +0000 UTC m=+146.345151255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.437559 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.458546 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kd2ql"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.461262 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.471076 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.486555 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-skddd"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.509635 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.531567 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:00 crc kubenswrapper[4900]: E1202 13:45:00.532062 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.032046993 +0000 UTC m=+146.447860854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:00 crc kubenswrapper[4900]: W1202 13:45:00.548261 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6ae55d7_1773_484c_9657_a6438f072dee.slice/crio-0e67d4fdabae515a7ba2640bbd152be856f2576a3a3856b39de8b30956609891 WatchSource:0}: Error finding container 0e67d4fdabae515a7ba2640bbd152be856f2576a3a3856b39de8b30956609891: Status 404 returned error can't find the container with id 0e67d4fdabae515a7ba2640bbd152be856f2576a3a3856b39de8b30956609891 Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.555865 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xcml8" podStartSLOduration=123.555833787 podStartE2EDuration="2m3.555833787s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:00.529768029 +0000 UTC m=+145.945581880" watchObservedRunningTime="2025-12-02 13:45:00.555833787 +0000 UTC m=+145.971647658" Dec 02 13:45:00 crc kubenswrapper[4900]: W1202 13:45:00.593824 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9bd8a2c_57b4_40b4_b931_16496b5236a0.slice/crio-23246cbbfcd73ea21faa74cb42a8e6f32001109886580bcb44f4bdab963419ff WatchSource:0}: Error finding container 23246cbbfcd73ea21faa74cb42a8e6f32001109886580bcb44f4bdab963419ff: Status 404 returned error can't find the container with id 23246cbbfcd73ea21faa74cb42a8e6f32001109886580bcb44f4bdab963419ff Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.633752 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:00 crc kubenswrapper[4900]: E1202 13:45:00.634472 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.134449593 +0000 UTC m=+146.550263444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.712089 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.735409 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:00 crc kubenswrapper[4900]: E1202 13:45:00.735822 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.235804993 +0000 UTC m=+146.651618844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.741546 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgtk7"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.743889 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.754714 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.773280 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.788417 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dtq92"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.808026 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.810164 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cjdbn" podStartSLOduration=124.810134809 podStartE2EDuration="2m4.810134809s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-02 13:45:00.808246487 +0000 UTC m=+146.224060338" watchObservedRunningTime="2025-12-02 13:45:00.810134809 +0000 UTC m=+146.225948660" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.819270 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29"] Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.838130 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:00 crc kubenswrapper[4900]: E1202 13:45:00.838553 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.338532392 +0000 UTC m=+146.754346243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:00 crc kubenswrapper[4900]: W1202 13:45:00.860969 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod535396d3_f3c8_4175_a498_526e02960674.slice/crio-4f9ed5ba15fed1f63a27ef5d74572ccbfa830e9704232a90e3291aee4418bd3d WatchSource:0}: Error finding container 4f9ed5ba15fed1f63a27ef5d74572ccbfa830e9704232a90e3291aee4418bd3d: Status 404 returned error can't find the container with id 4f9ed5ba15fed1f63a27ef5d74572ccbfa830e9704232a90e3291aee4418bd3d Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.895475 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7s7qs" podStartSLOduration=124.895450932 podStartE2EDuration="2m4.895450932s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:00.838538493 +0000 UTC m=+146.254352344" watchObservedRunningTime="2025-12-02 13:45:00.895450932 +0000 UTC m=+146.311264783" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.949563 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.949617 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.949742 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.949773 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.950357 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt"] Dec 02 13:45:00 crc kubenswrapper[4900]: E1202 13:45:00.950785 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.450766757 +0000 UTC m=+146.866580608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.950832 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.952213 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.961685 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.983325 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:00 crc kubenswrapper[4900]: I1202 13:45:00.989859 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.025383 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:01 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:01 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:01 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.025451 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.046159 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5t42f" podStartSLOduration=125.04613607 podStartE2EDuration="2m5.04613607s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.014846646 +0000 UTC m=+146.430660507" watchObservedRunningTime="2025-12-02 13:45:01.04613607 +0000 UTC m=+146.461949921" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.058153 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:01 crc kubenswrapper[4900]: E1202 13:45:01.058498 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.558476435 +0000 UTC m=+146.974290286 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.062798 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" event={"ID":"4e4344cf-f5b8-49d4-91e1-9726ea4e6197","Type":"ContainerStarted","Data":"30b7ee788dc17cfcc5eb02536b838860c4b74b18ccea1194f3fb63b04c43f844"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.062886 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" event={"ID":"4e4344cf-f5b8-49d4-91e1-9726ea4e6197","Type":"ContainerStarted","Data":"e82df2a473644653ec883bcb25b9ddbc6cb80fbb7109f5aa2e5cc060ace22249"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.081662 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5n647" podStartSLOduration=125.081630362 podStartE2EDuration="2m5.081630362s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.079688977 +0000 UTC m=+146.495502828" watchObservedRunningTime="2025-12-02 13:45:01.081630362 +0000 UTC m=+146.497444203" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.085810 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p"] Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.106481 4900 generic.go:334] "Generic (PLEG): container finished" podID="0afb9a31-c9fa-465a-9b2e-856ec706f5aa" containerID="2ef7883c3842e8726b3d96cd80c99dde1c69070d687aeae79bfd8c707a31ce98" exitCode=0 Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.107046 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" event={"ID":"0afb9a31-c9fa-465a-9b2e-856ec706f5aa","Type":"ContainerDied","Data":"2ef7883c3842e8726b3d96cd80c99dde1c69070d687aeae79bfd8c707a31ce98"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.131857 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.143722 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" event={"ID":"ca3a2685-a14d-4ffe-8f76-55de65b5841b","Type":"ContainerStarted","Data":"f033cd17a299dd8c2151b90fb3d0bd720855019fbbb186288e383b05c7184030"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.161449 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.162256 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:01 crc kubenswrapper[4900]: E1202 13:45:01.163101 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.663082736 +0000 UTC m=+147.078896587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.174837 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.176737 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" event={"ID":"d8c62ac3-afa3-4940-83b5-7f071a231367","Type":"ContainerStarted","Data":"492f6e73ce64e9957ca860191966b7c5c9cf7090752bd5c5a008c9c17c6e8391"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.196077 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" event={"ID":"3040cbe6-2783-43e2-9786-89fb91444b8e","Type":"ContainerStarted","Data":"cc1a02cd8c89a7affbda3bc42e6e7537cc77a51aa9587fd83c40e936e60a4e42"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.224229 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vrdh8" event={"ID":"95616fe1-4979-433d-afce-3235d5dab8a5","Type":"ContainerStarted","Data":"991da6f2d96e49dfd3268b02811be54191698ccecea282287b381e77bfb0e7b2"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.243504 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" event={"ID":"526979fe-578d-44b2-b8af-b02c7d712f7a","Type":"ContainerStarted","Data":"95ec6ee819a0370fb59fa12172c94d3769149ca61f9f3b03e2a2a1ff811b043d"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.263246 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" event={"ID":"b29ce520-0853-4925-9974-165b2a41bcfa","Type":"ContainerStarted","Data":"2208251c825e286f5f9942df52d373cd25d5288842922204b433b3f4c126d7b0"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.263301 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" 
event={"ID":"b29ce520-0853-4925-9974-165b2a41bcfa","Type":"ContainerStarted","Data":"8902a26c5ace76637e8f7af9079ba87f934ee0e422ff61b938572cb24663f86d"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.264322 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.265994 4900 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tf9z8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.266054 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" podUID="b29ce520-0853-4925-9974-165b2a41bcfa" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.267322 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" event={"ID":"65ea3056-b990-4a94-a5aa-56a2a0f24b92","Type":"ContainerStarted","Data":"f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.267476 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.268323 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:45:01 crc kubenswrapper[4900]: E1202 13:45:01.268621 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.768596733 +0000 UTC m=+147.184410584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.271114 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.271320 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" event={"ID":"3df749bb-0f54-4f5b-b9b9-cf46babaf698","Type":"ContainerStarted","Data":"1d69f65964bf3bcc36907a6e04b9166ed97733d842e253f67480afea953ffdf1"} Dec 02 13:45:01 crc kubenswrapper[4900]: E1202 13:45:01.271543 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.771527995 +0000 UTC m=+147.187341846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.278130 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" event={"ID":"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d","Type":"ContainerStarted","Data":"29a65de5a13994b7a8646ef9d66c9663b3162319f1086f986531c43e594279de"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.325534 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lpm4d" podStartSLOduration=5.325514803 podStartE2EDuration="5.325514803s" podCreationTimestamp="2025-12-02 13:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.324036862 +0000 UTC m=+146.739850713" watchObservedRunningTime="2025-12-02 13:45:01.325514803 +0000 UTC m=+146.741328654" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.336034 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" event={"ID":"98bca191-5b30-464c-89ad-01df623a1728","Type":"ContainerStarted","Data":"9524751c120bde0b383859864e5114fbb7ebc18f0568ae4301f316f7a8bfb84d"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.336097 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" event={"ID":"98bca191-5b30-464c-89ad-01df623a1728","Type":"ContainerStarted","Data":"3f265976654e4b9a40520e96f8aba8aff1c31c23f36153f027af3d0442f09ea7"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.337825 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.342258 4900 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7tvc9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.342317 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" podUID="98bca191-5b30-464c-89ad-01df623a1728" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.367408 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" podStartSLOduration=125.367383332 podStartE2EDuration="2m5.367383332s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.365244352 +0000 UTC m=+146.781058203" watchObservedRunningTime="2025-12-02 13:45:01.367383332 +0000 UTC m=+146.783197183" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.375151 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:01 crc kubenswrapper[4900]: E1202 13:45:01.375341 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.875304343 +0000 UTC m=+147.291118194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.375452 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:01 crc kubenswrapper[4900]: E1202 13:45:01.377672 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.877662859 +0000 UTC m=+147.293476710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.387304 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5wqjj" event={"ID":"c6ae55d7-1773-484c-9657-a6438f072dee","Type":"ContainerStarted","Data":"0e67d4fdabae515a7ba2640bbd152be856f2576a3a3856b39de8b30956609891"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.388158 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5wqjj" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.389447 4900 patch_prober.go:28] interesting pod/downloads-7954f5f757-5wqjj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.389492 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5wqjj" podUID="c6ae55d7-1773-484c-9657-a6438f072dee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.470127 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" event={"ID":"52279fab-53ca-41cf-8370-bbc4821be6c2","Type":"ContainerStarted","Data":"7af515c0518af891fe34bc41febff3264e7b3b97d701dfce4750d77fef0d46f9"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.473254 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" podStartSLOduration=124.473244539 podStartE2EDuration="2m4.473244539s" 
podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.471373806 +0000 UTC m=+146.887187657" watchObservedRunningTime="2025-12-02 13:45:01.473244539 +0000 UTC m=+146.889058380" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.473337 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5z2g6" podStartSLOduration=125.473333061 podStartE2EDuration="2m5.473333061s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.426088222 +0000 UTC m=+146.841902073" watchObservedRunningTime="2025-12-02 13:45:01.473333061 +0000 UTC m=+146.889146902" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.479793 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:01 crc kubenswrapper[4900]: E1202 13:45:01.480676 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:01.980652285 +0000 UTC m=+147.396466136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.514166 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" event={"ID":"64482a60-f5ce-47c1-9389-3945ebe3087d","Type":"ContainerStarted","Data":"ca3129c106197e2b03005bf3dfbdac6fb4d3428f6cd70e331306768a4855e13b"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.520986 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92" event={"ID":"f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f","Type":"ContainerStarted","Data":"ddcd7ff3c7d718f3a3262e6617ec728eaf2a665ab627f541d70202fd17747e88"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.530285 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8" podStartSLOduration=124.530261281 podStartE2EDuration="2m4.530261281s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.513351619 +0000 UTC m=+146.929165470" watchObservedRunningTime="2025-12-02 13:45:01.530261281 +0000 UTC m=+146.946075132" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.545659 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" event={"ID":"e5276c97-9e84-4632-98dc-43d3d4c1fefd","Type":"ContainerStarted","Data":"306bfdb74aa5a02279705b2cfb25a58588b389afd1241308a9f2c82d4ba98a39"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.581633 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:01 crc kubenswrapper[4900]: E1202 13:45:01.583682 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:02.083667202 +0000 UTC m=+147.499481053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.584548 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8h78" podStartSLOduration=124.584523216 podStartE2EDuration="2m4.584523216s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.583240081 +0000 UTC m=+146.999053932" watchObservedRunningTime="2025-12-02 13:45:01.584523216 +0000 UTC m=+147.000337067" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.593521 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" event={"ID":"312ac034-6fc6-4ceb-bb05-56d80e07a205","Type":"ContainerStarted","Data":"9c16f4c02ad01c85249ee6cacfbd8502d9410cc10734777cb99a6807e42440c0"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.660237 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" event={"ID":"de46dffd-919a-4df1-9d52-cbf1d14b8205","Type":"ContainerStarted","Data":"74a0d6da2851a18d3196b2877f01ae6bbfd1c4280a34eb17eaa1ba3890c6cc3a"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.683712 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t" event={"ID":"b3edd4c5-0d86-4f99-bb00-e9b134cda502","Type":"ContainerStarted","Data":"2283b31dbd822156aa5042565118360b7894a0394b3380eff683da570da27259"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.687294 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 
13:45:01 crc kubenswrapper[4900]: E1202 13:45:01.688798 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:02.188763118 +0000 UTC m=+147.604576969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.706126 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" event={"ID":"b725394f-0913-4f97-b61e-6906b21741be","Type":"ContainerStarted","Data":"4c22069d4c86d6811d88e8383ec60f0efc736a1c36542605cdd1362994434a59"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.708510 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" event={"ID":"0bfc0222-1bd6-4891-a1d5-2c6e53bd7592","Type":"ContainerStarted","Data":"f6a2cdec919c31b0b21f7154497cdaded3eeab570929c199ad64e42d1d24ee73"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.722221 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-77dgs" event={"ID":"4dda8c36-2a02-4199-a2e3-33ae4a218883","Type":"ContainerStarted","Data":"f135677a6231529a1cec3a5a189319405c6d9d8bf57191f033fa44255e2fabc5"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.725129 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" event={"ID":"0e8357b0-15fc-4b14-87b5-fdd058c316f3","Type":"ContainerStarted","Data":"031371de31b5782b61e0d4cff7270bdae92c387b182a481a7692ea298fe39111"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.726475 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-mrs5b" podStartSLOduration=124.72646001 podStartE2EDuration="2m4.72646001s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.725476783 +0000 UTC m=+147.141290624" watchObservedRunningTime="2025-12-02 13:45:01.72646001 +0000 UTC m=+147.142273861" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.733623 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" event={"ID":"535396d3-f3c8-4175-a498-526e02960674","Type":"ContainerStarted","Data":"4f9ed5ba15fed1f63a27ef5d74572ccbfa830e9704232a90e3291aee4418bd3d"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.749085 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7cgf2" event={"ID":"87d7155d-17a2-4191-8a85-9b6277641b28","Type":"ContainerStarted","Data":"860c235aeddcaf3a5eecb68210d0678fd190f4536315ebf2429f44dafe790044"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.764691 4900 generic.go:334] "Generic 
(PLEG): container finished" podID="c782003f-e8d3-4aa5-aba6-0db2706d4e43" containerID="927bede8b09a88ffe05da0cdb06d343d81e901ae5b29ddaf55b87355c0f2afde" exitCode=0 Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.764883 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" event={"ID":"c782003f-e8d3-4aa5-aba6-0db2706d4e43","Type":"ContainerDied","Data":"927bede8b09a88ffe05da0cdb06d343d81e901ae5b29ddaf55b87355c0f2afde"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.770606 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.779269 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-vrdh8" podStartSLOduration=125.779239134 podStartE2EDuration="2m5.779239134s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.764568595 +0000 UTC m=+147.180382446" watchObservedRunningTime="2025-12-02 13:45:01.779239134 +0000 UTC m=+147.195052985" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.804159 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:01 crc kubenswrapper[4900]: E1202 13:45:01.806254 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:02.306238458 +0000 UTC m=+147.722052309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.811976 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb" event={"ID":"c9bd8a2c-57b4-40b4-b931-16496b5236a0","Type":"ContainerStarted","Data":"23246cbbfcd73ea21faa74cb42a8e6f32001109886580bcb44f4bdab963419ff"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.832661 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" event={"ID":"a72dd04d-bb06-4b3a-9f08-d68072239bd8","Type":"ContainerStarted","Data":"d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.833506 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.849493 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xtsj2" event={"ID":"725a6e19-5648-4f21-8405-1b6f29d6e9be","Type":"ContainerStarted","Data":"0a9977e3585350c9feff09452b49eeb79b67318ea352b95a3cb82f5733079565"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.874313 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" event={"ID":"67af6ca9-8ecc-4615-8d7a-670914a7d5f5","Type":"ContainerStarted","Data":"323c1668287ea219b93ddf89c4cceec99591458148523a36282f59d0f0c117ef"} Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.874417 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.888538 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wsrhj" podStartSLOduration=124.888511256 podStartE2EDuration="2m4.888511256s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.868930359 +0000 UTC m=+147.284744210" watchObservedRunningTime="2025-12-02 13:45:01.888511256 +0000 UTC m=+147.304325107" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.901093 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" podStartSLOduration=124.901062207 podStartE2EDuration="2m4.901062207s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.889483473 +0000 UTC m=+147.305297324" watchObservedRunningTime="2025-12-02 13:45:01.901062207 +0000 UTC m=+147.316876058" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.905532 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:01 crc kubenswrapper[4900]: E1202 13:45:01.906843 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:02.406820317 +0000 UTC m=+147.822634168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.989546 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" podStartSLOduration=125.989523987 podStartE2EDuration="2m5.989523987s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.948916823 +0000 UTC m=+147.364730674" watchObservedRunningTime="2025-12-02 13:45:01.989523987 +0000 UTC m=+147.405337838" Dec 02 13:45:01 crc kubenswrapper[4900]: I1202 13:45:01.991042 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-7cgf2" podStartSLOduration=5.991034129 podStartE2EDuration="5.991034129s" podCreationTimestamp="2025-12-02 13:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:01.988358715 +0000 UTC m=+147.404172566" watchObservedRunningTime="2025-12-02 13:45:01.991034129 +0000 UTC m=+147.406847980" Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.019795 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:02 crc kubenswrapper[4900]: E1202 13:45:02.022147 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:02.522125228 +0000 UTC m=+147.937939079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.029566 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:02 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:02 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:02 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.029628 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.087058 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" podStartSLOduration=126.08703731 podStartE2EDuration="2m6.08703731s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:02.057096894 +0000 UTC m=+147.472910745" watchObservedRunningTime="2025-12-02 13:45:02.08703731 +0000 UTC m=+147.502851161" Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.122279 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:02 crc kubenswrapper[4900]: E1202 13:45:02.122599 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:02.622579233 +0000 UTC m=+148.038393074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.178210 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb" podStartSLOduration=125.178178156 podStartE2EDuration="2m5.178178156s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:02.139065444 +0000 UTC m=+147.554879295" watchObservedRunningTime="2025-12-02 13:45:02.178178156 +0000 UTC m=+147.593992007" Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.219929 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6ch9f" podStartSLOduration=126.219907771 podStartE2EDuration="2m6.219907771s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:02.17940395 +0000 UTC m=+147.595217801" watchObservedRunningTime="2025-12-02 13:45:02.219907771 +0000 UTC m=+147.635721622" Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.221910 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" podStartSLOduration=126.221905537 podStartE2EDuration="2m6.221905537s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:02.219163501 +0000 UTC m=+147.634977352" watchObservedRunningTime="2025-12-02 13:45:02.221905537 +0000 UTC m=+147.637719388" Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.224394 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:02 crc kubenswrapper[4900]: E1202 13:45:02.224790 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:02.724777397 +0000 UTC m=+148.140591248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.262089 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5wqjj" podStartSLOduration=126.262068389 podStartE2EDuration="2m6.262068389s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:02.259226979 +0000 UTC m=+147.675040830" watchObservedRunningTime="2025-12-02 13:45:02.262068389 +0000 UTC m=+147.677882240" Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.282527 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rl4bn" podStartSLOduration=126.28250752 podStartE2EDuration="2m6.28250752s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:02.281927033 +0000 UTC m=+147.697740884" watchObservedRunningTime="2025-12-02 13:45:02.28250752 +0000 UTC m=+147.698321371" Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.329986 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:02 crc kubenswrapper[4900]: E1202 13:45:02.330227 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:02.830192451 +0000 UTC m=+148.246006302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.330579 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:02 crc kubenswrapper[4900]: E1202 13:45:02.330987 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:02.830970193 +0000 UTC m=+148.246784044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.439422 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:02 crc kubenswrapper[4900]: E1202 13:45:02.439799 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:02.939778582 +0000 UTC m=+148.355592433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:02 crc kubenswrapper[4900]: W1202 13:45:02.528109 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-51e036271776f543f49fddfaadf7e5f5c07da64faa41d3233c0925862a112ac2 WatchSource:0}: Error finding container 51e036271776f543f49fddfaadf7e5f5c07da64faa41d3233c0925862a112ac2: Status 404 returned error can't find the container with id 51e036271776f543f49fddfaadf7e5f5c07da64faa41d3233c0925862a112ac2 Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.541213 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:02 crc kubenswrapper[4900]: E1202 13:45:02.541833 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:03.041813421 +0000 UTC m=+148.457627262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.644825 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:02 crc kubenswrapper[4900]: E1202 13:45:02.645356 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:03.145333353 +0000 UTC m=+148.561147204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.769043 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:02 crc kubenswrapper[4900]: E1202 13:45:02.769886 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:03.269867431 +0000 UTC m=+148.685681282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.854750 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.855210 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.878018 4900 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4s6cc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 02 13:45:02 crc kubenswrapper[4900]: [+]log ok Dec 02 13:45:02 crc kubenswrapper[4900]: [+]etcd ok Dec 02 13:45:02 crc kubenswrapper[4900]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 02 13:45:02 crc kubenswrapper[4900]: [+]poststarthook/generic-apiserver-start-informers ok Dec 02 13:45:02 crc kubenswrapper[4900]: [+]poststarthook/max-in-flight-filter ok Dec 02 13:45:02 crc kubenswrapper[4900]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 02 13:45:02 crc kubenswrapper[4900]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 02 13:45:02 crc kubenswrapper[4900]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 02 13:45:02 crc kubenswrapper[4900]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 02 13:45:02 crc kubenswrapper[4900]: [+]poststarthook/project.openshift.io-projectcache ok Dec 02 13:45:02 crc kubenswrapper[4900]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 02 13:45:02 crc kubenswrapper[4900]: [+]poststarthook/openshift.io-startinformers ok Dec 02 13:45:02 crc kubenswrapper[4900]: 
[+]poststarthook/openshift.io-restmapperupdater ok Dec 02 13:45:02 crc kubenswrapper[4900]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 02 13:45:02 crc kubenswrapper[4900]: livez check failed Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.878100 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" podUID="52279fab-53ca-41cf-8370-bbc4821be6c2" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.882189 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:02 crc kubenswrapper[4900]: E1202 13:45:02.882608 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:03.382585058 +0000 UTC m=+148.798398909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.981882 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vrdh8" event={"ID":"95616fe1-4979-433d-afce-3235d5dab8a5","Type":"ContainerStarted","Data":"d43e1770098954ec58f10770eaaddc859990a053bea0077fc33c91a8f2c38e12"} Dec 02 13:45:02 crc kubenswrapper[4900]: I1202 13:45:02.985823 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:02 crc kubenswrapper[4900]: E1202 13:45:02.986298 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:03.486281175 +0000 UTC m=+148.902095026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.006673 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-77dgs" event={"ID":"4dda8c36-2a02-4199-a2e3-33ae4a218883","Type":"ContainerStarted","Data":"20b79ff1cbe44c264a29fcfa674fd83d79f6e0aaa6b96b6378ebff20016bf14c"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.019917 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" event={"ID":"de46dffd-919a-4df1-9d52-cbf1d14b8205","Type":"ContainerStarted","Data":"959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.021289 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.023300 4900 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgtk7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.023369 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" podUID="de46dffd-919a-4df1-9d52-cbf1d14b8205" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.025308 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:03 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:03 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:03 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.025338 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.026459 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n7qvb" event={"ID":"c9bd8a2c-57b4-40b4-b931-16496b5236a0","Type":"ContainerStarted","Data":"91d6b8423b724d2564625c906e9205cc84deceebb74835a7780a78d4e2183a19"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.060553 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" podStartSLOduration=126.060534548 podStartE2EDuration="2m6.060534548s" podCreationTimestamp="2025-12-02 13:42:57 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:03.055473137 +0000 UTC m=+148.471286978" watchObservedRunningTime="2025-12-02 13:45:03.060534548 +0000 UTC m=+148.476348399" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.087029 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:03 crc kubenswrapper[4900]: E1202 13:45:03.088687 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:03.588666404 +0000 UTC m=+149.004480255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.090884 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5wqjj" event={"ID":"c6ae55d7-1773-484c-9657-a6438f072dee","Type":"ContainerStarted","Data":"b6333aea43babcdfb615dc5d1a90360cad0d59b9eec088da2c94400b6fa03250"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.092176 4900 patch_prober.go:28] interesting pod/downloads-7954f5f757-5wqjj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.092630 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5wqjj" podUID="c6ae55d7-1773-484c-9657-a6438f072dee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.103173 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" event={"ID":"8694fceb-8a5b-41a0-8c8a-2dbca31557ca","Type":"ContainerStarted","Data":"f803a205d258669b5b3095559b679d8a930e1502c9595c17e2e102042a8f7806"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.140688 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" event={"ID":"3040cbe6-2783-43e2-9786-89fb91444b8e","Type":"ContainerStarted","Data":"779a0ea4bba94c1c892943f2dd8db1b7fa64444f25673c5a7dc6b6a627afd8a1"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.190713 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zbsck" 
event={"ID":"b725394f-0913-4f97-b61e-6906b21741be","Type":"ContainerStarted","Data":"81e9f326c1a78d2f277cbec2b8cadd9deacbb84267c1e38d84a001af26f12bbe"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.191215 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:03 crc kubenswrapper[4900]: E1202 13:45:03.193008 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:03.692991777 +0000 UTC m=+149.108805628 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.197083 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" event={"ID":"312ac034-6fc6-4ceb-bb05-56d80e07a205","Type":"ContainerStarted","Data":"2dbf92a2a5381d587ac5fd3ea48c64d3cd2f734408842048beee6fc8ac987d19"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.211017 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kd2ql" podStartSLOduration=127.21099379 podStartE2EDuration="2m7.21099379s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:03.210443915 +0000 UTC m=+148.626257766" watchObservedRunningTime="2025-12-02 13:45:03.21099379 +0000 UTC m=+148.626807641" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.231034 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" podUID="535396d3-f3c8-4175-a498-526e02960674" containerName="collect-profiles" containerID="cri-o://679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab" gracePeriod=30 Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.248779 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"51e036271776f543f49fddfaadf7e5f5c07da64faa41d3233c0925862a112ac2"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.265226 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8d5sp" podStartSLOduration=127.265208334 podStartE2EDuration="2m7.265208334s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:03.264771142 +0000 
UTC m=+148.680584993" watchObservedRunningTime="2025-12-02 13:45:03.265208334 +0000 UTC m=+148.681022185" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.295547 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:03 crc kubenswrapper[4900]: E1202 13:45:03.297422 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:03.797406284 +0000 UTC m=+149.213220135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.334298 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" podStartSLOduration=127.334275863 podStartE2EDuration="2m7.334275863s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:03.333267945 +0000 UTC m=+148.749081806" watchObservedRunningTime="2025-12-02 13:45:03.334275863 +0000 UTC m=+148.750089714" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.357465 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" event={"ID":"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d","Type":"ContainerStarted","Data":"2272a1b76dc462cde03dafd51b9287598bf00001a7b143c6235ebfdaf997b567"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.357638 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s7ml6"] Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.358733 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.367493 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.381056 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.386208 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s7ml6"] Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.393768 4900 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sqk29 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.393830 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" podUID="67af6ca9-8ecc-4615-8d7a-670914a7d5f5" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.402261 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:03 crc kubenswrapper[4900]: E1202 13:45:03.402668 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:03.902654583 +0000 UTC m=+149.318468434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.422444 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92" event={"ID":"f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f","Type":"ContainerStarted","Data":"6df47dc807142d1d219934382af655fde636235303001a0ca9c67c7666ae073d"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.436005 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" podStartSLOduration=126.435987484 podStartE2EDuration="2m6.435987484s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:03.422105546 +0000 UTC m=+148.837919397" watchObservedRunningTime="2025-12-02 13:45:03.435987484 +0000 UTC m=+148.851801335" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.463179 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" podStartSLOduration=126.463158983 podStartE2EDuration="2m6.463158983s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:03.462093603 +0000 UTC m=+148.877907454" watchObservedRunningTime="2025-12-02 13:45:03.463158983 +0000 UTC m=+148.878972834" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.504885 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.505402 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-utilities\") pod \"certified-operators-s7ml6\" (UID: \"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.505456 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgx64\" (UniqueName: \"kubernetes.io/projected/5982b283-c40f-4ff6-9ee9-55a16f1db376-kube-api-access-qgx64\") pod \"certified-operators-s7ml6\" (UID: \"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.505489 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-catalog-content\") pod \"certified-operators-s7ml6\" (UID: 
\"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:03 crc kubenswrapper[4900]: E1202 13:45:03.506344 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:04.006317648 +0000 UTC m=+149.422131499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.576540 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xtsj2" event={"ID":"725a6e19-5648-4f21-8405-1b6f29d6e9be","Type":"ContainerStarted","Data":"fa08bb17b71dc17a96a7ae9ca39c3774184faa528890b29d6b7d04506c616291"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.577456 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xtsj2" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.587535 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2sjzb"] Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.602020 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" event={"ID":"0e8357b0-15fc-4b14-87b5-fdd058c316f3","Type":"ContainerStarted","Data":"4d6eaa1dad0751174ec90e07afac23c92fe207a7baa4bf793c54277f1f26d56e"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.602061 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" event={"ID":"d8c62ac3-afa3-4940-83b5-7f071a231367","Type":"ContainerStarted","Data":"de19b82c6ad0e4f953e16aec2ed5bcbbdf772e4bf189a27d97bba81e67c1261f"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.602439 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.607505 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sjzb"] Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.613011 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.623921 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" event={"ID":"ab3adbee-6d24-4396-a6a8-dfd4e5255627","Type":"ContainerStarted","Data":"7de253bbcda9decd2df142187d0bc0ecf8ad00ce595c710ad44c296077a5c2c8"} Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.626635 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.626694 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-utilities\") pod \"certified-operators-s7ml6\" (UID: \"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.626731 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgx64\" (UniqueName: \"kubernetes.io/projected/5982b283-c40f-4ff6-9ee9-55a16f1db376-kube-api-access-qgx64\") pod \"certified-operators-s7ml6\" (UID: \"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.626756 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-catalog-content\") pod \"certified-operators-s7ml6\" (UID: \"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.627158 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-catalog-content\") pod \"certified-operators-s7ml6\" (UID: \"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.627269 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-utilities\") pod \"certified-operators-s7ml6\" (UID: \"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:03 crc kubenswrapper[4900]: E1202 13:45:03.627544 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-02 13:45:04.127529713 +0000 UTC m=+149.543343564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.656487 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7tvc9" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.668556 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xtsj2" podStartSLOduration=7.6685412490000004 podStartE2EDuration="7.668541249s" podCreationTimestamp="2025-12-02 13:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:03.6159454 +0000 UTC m=+149.031759251" watchObservedRunningTime="2025-12-02 13:45:03.668541249 +0000 UTC m=+149.084355100" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.724816 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" podStartSLOduration=126.72479528 podStartE2EDuration="2m6.72479528s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:03.669087244 +0000 UTC m=+149.084901095" watchObservedRunningTime="2025-12-02 13:45:03.72479528 +0000 UTC m=+149.140609131" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.745292 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.745607 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-catalog-content\") pod \"community-operators-2sjzb\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.746099 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-utilities\") pod \"community-operators-2sjzb\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.746124 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m26kk\" (UniqueName: \"kubernetes.io/projected/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-kube-api-access-m26kk\") pod \"community-operators-2sjzb\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " 
pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:45:03 crc kubenswrapper[4900]: E1202 13:45:03.747180 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:04.247161433 +0000 UTC m=+149.662975284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.760820 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgx64\" (UniqueName: \"kubernetes.io/projected/5982b283-c40f-4ff6-9ee9-55a16f1db376-kube-api-access-qgx64\") pod \"certified-operators-s7ml6\" (UID: \"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.765600 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ldz9n" podStartSLOduration=127.765583408 podStartE2EDuration="2m7.765583408s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:03.726901858 +0000 UTC m=+149.142715709" watchObservedRunningTime="2025-12-02 13:45:03.765583408 +0000 UTC m=+149.181397259" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.769360 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mkxxn"] Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.770746 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.781731 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkxxn"] Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.848682 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.848765 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-utilities\") pod \"community-operators-2sjzb\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.848789 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m26kk\" (UniqueName: \"kubernetes.io/projected/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-kube-api-access-m26kk\") pod \"community-operators-2sjzb\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.848826 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-catalog-content\") pod \"community-operators-2sjzb\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.849208 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-catalog-content\") pod \"community-operators-2sjzb\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:45:03 crc kubenswrapper[4900]: E1202 13:45:03.849484 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:04.349471151 +0000 UTC m=+149.765285002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.849832 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-utilities\") pod \"community-operators-2sjzb\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.903311 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m26kk\" (UniqueName: \"kubernetes.io/projected/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-kube-api-access-m26kk\") pod \"community-operators-2sjzb\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.938262 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.946267 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d7jfg"] Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.947237 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.951212 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.951383 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8jh\" (UniqueName: \"kubernetes.io/projected/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-kube-api-access-2d8jh\") pod \"certified-operators-mkxxn\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") " pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.951466 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-catalog-content\") pod \"certified-operators-mkxxn\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") " pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.951490 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-utilities\") pod \"certified-operators-mkxxn\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") " pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:03 crc kubenswrapper[4900]: E1202 13:45:03.951681 4900 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:04.451628614 +0000 UTC m=+149.867442465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.980903 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d7jfg"] Dec 02 13:45:03 crc kubenswrapper[4900]: I1202 13:45:03.991623 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.030974 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:04 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:04 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:04 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.031051 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.063228 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.063656 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-catalog-content\") pod \"certified-operators-mkxxn\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") " pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.063690 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-utilities\") pod \"certified-operators-mkxxn\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") " pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.063791 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984sg\" (UniqueName: \"kubernetes.io/projected/798784ef-f2ba-494b-a896-443aef626a69-kube-api-access-984sg\") pod \"community-operators-d7jfg\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") " 
pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.063824 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8jh\" (UniqueName: \"kubernetes.io/projected/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-kube-api-access-2d8jh\") pod \"certified-operators-mkxxn\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") " pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.063851 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-utilities\") pod \"community-operators-d7jfg\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") " pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.063950 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-catalog-content\") pod \"community-operators-d7jfg\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") " pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:45:04 crc kubenswrapper[4900]: E1202 13:45:04.064543 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:04.564524237 +0000 UTC m=+149.980338088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.065058 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-catalog-content\") pod \"certified-operators-mkxxn\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") " pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.065282 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-utilities\") pod \"certified-operators-mkxxn\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") " pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.095332 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29411370-bpgg9_535396d3-f3c8-4175-a498-526e02960674/collect-profiles/0.log" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.095415 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.103107 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8jh\" (UniqueName: \"kubernetes.io/projected/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-kube-api-access-2d8jh\") pod \"certified-operators-mkxxn\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") " pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.124530 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.166253 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.166481 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-984sg\" (UniqueName: \"kubernetes.io/projected/798784ef-f2ba-494b-a896-443aef626a69-kube-api-access-984sg\") pod \"community-operators-d7jfg\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") " pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.166510 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-utilities\") pod \"community-operators-d7jfg\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") " pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.166550 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-catalog-content\") pod \"community-operators-d7jfg\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") " pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.166974 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-catalog-content\") pod \"community-operators-d7jfg\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") " pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:45:04 crc kubenswrapper[4900]: E1202 13:45:04.167060 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:04.66704121 +0000 UTC m=+150.082855061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.167570 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-utilities\") pod \"community-operators-d7jfg\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") " pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.203605 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-984sg\" (UniqueName: \"kubernetes.io/projected/798784ef-f2ba-494b-a896-443aef626a69-kube-api-access-984sg\") pod \"community-operators-d7jfg\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") " pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.267967 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/535396d3-f3c8-4175-a498-526e02960674-config-volume\") pod \"535396d3-f3c8-4175-a498-526e02960674\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.268125 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc7k7\" (UniqueName: \"kubernetes.io/projected/535396d3-f3c8-4175-a498-526e02960674-kube-api-access-gc7k7\") pod \"535396d3-f3c8-4175-a498-526e02960674\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.268206 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/535396d3-f3c8-4175-a498-526e02960674-secret-volume\") pod \"535396d3-f3c8-4175-a498-526e02960674\" (UID: \"535396d3-f3c8-4175-a498-526e02960674\") " Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.268378 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:04 crc kubenswrapper[4900]: E1202 13:45:04.268816 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:04.768799522 +0000 UTC m=+150.184613373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.269626 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/535396d3-f3c8-4175-a498-526e02960674-config-volume" (OuterVolumeSpecName: "config-volume") pod "535396d3-f3c8-4175-a498-526e02960674" (UID: "535396d3-f3c8-4175-a498-526e02960674"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.284493 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535396d3-f3c8-4175-a498-526e02960674-kube-api-access-gc7k7" (OuterVolumeSpecName: "kube-api-access-gc7k7") pod "535396d3-f3c8-4175-a498-526e02960674" (UID: "535396d3-f3c8-4175-a498-526e02960674"). InnerVolumeSpecName "kube-api-access-gc7k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.292485 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/535396d3-f3c8-4175-a498-526e02960674-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "535396d3-f3c8-4175-a498-526e02960674" (UID: "535396d3-f3c8-4175-a498-526e02960674"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.353945 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.369283 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.369875 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/535396d3-f3c8-4175-a498-526e02960674-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.369888 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc7k7\" (UniqueName: \"kubernetes.io/projected/535396d3-f3c8-4175-a498-526e02960674-kube-api-access-gc7k7\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.369902 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/535396d3-f3c8-4175-a498-526e02960674-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:04 crc kubenswrapper[4900]: E1202 13:45:04.369978 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 13:45:04.869956747 +0000 UTC m=+150.285770588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.459389 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tf9z8"
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.472537 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:04 crc kubenswrapper[4900]: E1202 13:45:04.472965 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:04.972950223 +0000 UTC m=+150.388764074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.585317 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:45:04 crc kubenswrapper[4900]: E1202 13:45:04.585672 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:05.08563645 +0000 UTC m=+150.501450301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.682747 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0555671732cb331a6b88ffadad50bda18ec47dc95a981541e5a13fb7b74a373f"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.682801 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0d52a5cf87e169e114c40465f256d8fdbe34fd775a2a3703ec9cbda4b8689f41"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.686433 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:04 crc kubenswrapper[4900]: E1202 13:45:04.686852 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:05.186836996 +0000 UTC m=+150.602650837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.712099 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" event={"ID":"c782003f-e8d3-4aa5-aba6-0db2706d4e43","Type":"ContainerStarted","Data":"fae70eb35b7aa761a3fa728bf364402e1b5e11ca64b35e0facacf8bf0b2b4485"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.712763 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk"
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.715315 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t" event={"ID":"b3edd4c5-0d86-4f99-bb00-e9b134cda502","Type":"ContainerStarted","Data":"55adccef5f9579249d45b55e81fa5338516cb4ec987217d2eda1adad74f674bc"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.715335 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t" event={"ID":"b3edd4c5-0d86-4f99-bb00-e9b134cda502","Type":"ContainerStarted","Data":"9374023c76dae9bf0f36852938104f4e10ce5c8ceb69e35313b1b886655bcfb2"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.729344 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" event={"ID":"ab3adbee-6d24-4396-a6a8-dfd4e5255627","Type":"ContainerStarted","Data":"9d677f90256ac205e72e1e5887d2d40e75a81f8b94c09a03d714a476ccc07ba8"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.729388 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" event={"ID":"ab3adbee-6d24-4396-a6a8-dfd4e5255627","Type":"ContainerStarted","Data":"b7b8f89a1b13e2aff6dd0e1246c3432c6094f5c053f6322f75322acd249821e6"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.730047 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt"
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.750964 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29411370-bpgg9_535396d3-f3c8-4175-a498-526e02960674/collect-profiles/0.log"
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.751239 4900 generic.go:334] "Generic (PLEG): container finished" podID="535396d3-f3c8-4175-a498-526e02960674" containerID="679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab" exitCode=2
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.751324 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" event={"ID":"535396d3-f3c8-4175-a498-526e02960674","Type":"ContainerDied","Data":"679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.751353 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9" event={"ID":"535396d3-f3c8-4175-a498-526e02960674","Type":"ContainerDied","Data":"4f9ed5ba15fed1f63a27ef5d74572ccbfa830e9704232a90e3291aee4418bd3d"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.751369 4900 scope.go:117] "RemoveContainer" containerID="679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab"
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.751506 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9"
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.788930 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:45:04 crc kubenswrapper[4900]: E1202 13:45:04.790363 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:05.290343117 +0000 UTC m=+150.706156968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.799660 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s7ml6"]
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.804571 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" event={"ID":"0afb9a31-c9fa-465a-9b2e-856ec706f5aa","Type":"ContainerStarted","Data":"5df2f78530676bf0e3784b327da82c49b5e27a635e77b0caa0d3d2a044f13cfb"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.812275 4900 scope.go:117] "RemoveContainer" containerID="679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab"
Dec 02 13:45:04 crc kubenswrapper[4900]: E1202 13:45:04.822399 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab\": container with ID starting with 679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab not found: ID does not exist" containerID="679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab"
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.822444 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab"} err="failed to get container status \"679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab\": rpc error: code = NotFound desc = could not find container \"679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab\": container with ID starting with 679c0112bb4480316938d917cb94ea57d41c54ffbc35e1f2d60f4718d77f3dab not found: ID does not exist"
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.842624 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6pj4t" podStartSLOduration=127.842608387 podStartE2EDuration="2m7.842608387s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:04.834952743 +0000 UTC m=+150.250766594" watchObservedRunningTime="2025-12-02 13:45:04.842608387 +0000 UTC m=+150.258422238"
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.849917 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-skddd" event={"ID":"0e8357b0-15fc-4b14-87b5-fdd058c316f3","Type":"ContainerStarted","Data":"3e41b5cd8d099b24e348d01ceff6d8f2f6f9dc3bd91e1f0dae4d24a0d6417703"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.877225 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkxxn"]
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.891325 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:04 crc kubenswrapper[4900]: E1202 13:45:04.892130 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:05.392107659 +0000 UTC m=+150.807921510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.893872 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" event={"ID":"ca3a2685-a14d-4ffe-8f76-55de65b5841b","Type":"ContainerStarted","Data":"2d3467716e781ecdef12055981f2440fd14b7527922b82f35bdf9d6b8a0b8896"}
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.913502 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" podStartSLOduration=128.913482536 podStartE2EDuration="2m8.913482536s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:04.91290747 +0000 UTC m=+150.328721321" watchObservedRunningTime="2025-12-02 13:45:04.913482536 +0000 UTC m=+150.329296387"
Dec 02 13:45:04 crc kubenswrapper[4900]: I1202 13:45:04.972664 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" event={"ID":"8694fceb-8a5b-41a0-8c8a-2dbca31557ca","Type":"ContainerStarted","Data":"64ab29990787ef15b1e31f7b50b444685fad488c5354f36b0d04d47d3284e09a"}
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:04.994660 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:45:05 crc kubenswrapper[4900]: E1202 13:45:04.995588 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:05.495546488 +0000 UTC m=+150.911360339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.003599 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" event={"ID":"3df749bb-0f54-4f5b-b9b9-cf46babaf698","Type":"ContainerStarted","Data":"4a05554471bb2527a6c229af0ca912f3f1bf8610ce7065cc0c91f3febc090a6d"}
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.020905 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-srz7l" event={"ID":"55e32120-ccd5-47d7-b0d7-2ca8ddb9d03d","Type":"ContainerStarted","Data":"7be9e044896308dd209518e566ca529b4ac32c64738f470c8e49c2424f1728dc"}
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.030097 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt" podStartSLOduration=128.030078402 podStartE2EDuration="2m8.030078402s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:04.999606911 +0000 UTC m=+150.415420762" watchObservedRunningTime="2025-12-02 13:45:05.030078402 +0000 UTC m=+150.445892243"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.031023 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" podStartSLOduration=5.031018179 podStartE2EDuration="5.031018179s" podCreationTimestamp="2025-12-02 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:05.028734205 +0000 UTC m=+150.444548056" watchObservedRunningTime="2025-12-02 13:45:05.031018179 +0000 UTC m=+150.446832030"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.036883 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:45:05 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld
Dec 02 13:45:05 crc kubenswrapper[4900]: [+]process-running ok
Dec 02 13:45:05 crc kubenswrapper[4900]: healthz check failed
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.036955 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.069929 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xtsj2" event={"ID":"725a6e19-5648-4f21-8405-1b6f29d6e9be","Type":"ContainerStarted","Data":"f1316b395a4e1119765f75da5c64c017cfbe95178fb96507649c455660c04266"}
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.097046 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:05 crc kubenswrapper[4900]: E1202 13:45:05.099018 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:05.599000597 +0000 UTC m=+151.014814458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.107749 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9"]
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.108674 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2sjzb"]
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.131595 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411370-bpgg9"]
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.154073 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b5d5000773ceec6f6a97032dd23cc093b87eaafe705d7a150811b819526fb88a"}
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.155284 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"509712a53e945ef1337eec452b3be3c24889524ef6100598c4658859d673a582"}
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.156259 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.159333 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" podStartSLOduration=128.159304951 podStartE2EDuration="2m8.159304951s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:05.141049552 +0000 UTC m=+150.556863403" watchObservedRunningTime="2025-12-02 13:45:05.159304951 +0000 UTC m=+150.575118802"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.198236 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:45:05 crc kubenswrapper[4900]: E1202 13:45:05.199875 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:05.699845554 +0000 UTC m=+151.115659405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.202542 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qsd4r" podStartSLOduration=129.202517558 podStartE2EDuration="2m9.202517558s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:05.188888958 +0000 UTC m=+150.604702799" watchObservedRunningTime="2025-12-02 13:45:05.202517558 +0000 UTC m=+150.618331399"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.204006 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c810c01a5bc7b0f10580f5ca50fc43d069c2714395ce697e817555c89c80e6ae"}
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.211266 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29" event={"ID":"67af6ca9-8ecc-4615-8d7a-670914a7d5f5","Type":"ContainerStarted","Data":"34e0e08d73919e8a91d6ffe63239324b28298414ad906bd4003b2d12c178165f"}
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.213841 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92" event={"ID":"f3ccbfeb-97c6-4bca-a7ea-bd151c32c06f","Type":"ContainerStarted","Data":"80a6cb6da884171f62e46a113efb0c30b03b64e02f146dfb59fee185df97f2f3"}
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.220155 4900 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgtk7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body=
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.220906 4900 patch_prober.go:28] interesting pod/downloads-7954f5f757-5wqjj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.220946 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5wqjj" podUID="c6ae55d7-1773-484c-9657-a6438f072dee" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.221049 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" podUID="de46dffd-919a-4df1-9d52-cbf1d14b8205" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.227043 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sqk29"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.249015 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d7jfg"]
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.300400 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:05 crc kubenswrapper[4900]: E1202 13:45:05.301890 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:05.801846552 +0000 UTC m=+151.217660403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.345191 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4zkbx"]
Dec 02 13:45:05 crc kubenswrapper[4900]: E1202 13:45:05.355414 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535396d3-f3c8-4175-a498-526e02960674" containerName="collect-profiles"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.356235 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="535396d3-f3c8-4175-a498-526e02960674" containerName="collect-profiles"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.356356 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="535396d3-f3c8-4175-a498-526e02960674" containerName="collect-profiles"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.357178 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zkbx"]
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.357280 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.359955 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.402602 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:45:05 crc kubenswrapper[4900]: E1202 13:45:05.403004 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:05.902972247 +0000 UTC m=+151.318786098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.497138 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bkwqf" podStartSLOduration=129.497116296 podStartE2EDuration="2m9.497116296s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:05.490343737 +0000 UTC m=+150.906157588" watchObservedRunningTime="2025-12-02 13:45:05.497116296 +0000 UTC m=+150.912930147"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.505136 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.505215 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-catalog-content\") pod \"redhat-marketplace-4zkbx\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.505235 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-utilities\") pod \"redhat-marketplace-4zkbx\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.505264 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsh4r\" (UniqueName: \"kubernetes.io/projected/3dc4aaac-9b9e-42e6-b943-25236645d1b2-kube-api-access-wsh4r\") pod \"redhat-marketplace-4zkbx\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:45:05 crc kubenswrapper[4900]: E1202 13:45:05.505578 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:06.005548921 +0000 UTC m=+151.421362772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.607262 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.607562 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-catalog-content\") pod \"redhat-marketplace-4zkbx\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.607597 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-utilities\") pod \"redhat-marketplace-4zkbx\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.607627 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsh4r\" (UniqueName: \"kubernetes.io/projected/3dc4aaac-9b9e-42e6-b943-25236645d1b2-kube-api-access-wsh4r\") pod \"redhat-marketplace-4zkbx\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.608692 4900 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.609182 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-catalog-content\") pod \"redhat-marketplace-4zkbx\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.609246 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-utilities\") pod \"redhat-marketplace-4zkbx\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:45:05 crc kubenswrapper[4900]: E1202 13:45:05.609482 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:06.109454923 +0000 UTC m=+151.525268774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.640083 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsh4r\" (UniqueName: \"kubernetes.io/projected/3dc4aaac-9b9e-42e6-b943-25236645d1b2-kube-api-access-wsh4r\") pod \"redhat-marketplace-4zkbx\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.709197 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:05 crc kubenswrapper[4900]: E1202 13:45:05.709828 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:06.209805666 +0000 UTC m=+151.625619517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.736392 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.742326 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-dtq92" podStartSLOduration=128.742305413 podStartE2EDuration="2m8.742305413s" podCreationTimestamp="2025-12-02 13:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:05.614176965 +0000 UTC m=+151.029990816" watchObservedRunningTime="2025-12-02 13:45:05.742305413 +0000 UTC m=+151.158119264"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.743073 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2xg"]
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.744154 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.776595 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2xg"]
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.818232 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:45:05 crc kubenswrapper[4900]: E1202 13:45:05.818618 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:06.318596794 +0000 UTC m=+151.734410645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.921549 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.921616 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-catalog-content\") pod \"redhat-marketplace-tw2xg\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.921657 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmjj\" (UniqueName: \"kubernetes.io/projected/d5f5c196-c4c3-467a-882c-e8a39aabbede-kube-api-access-ksmjj\") pod \"redhat-marketplace-tw2xg\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:45:05 crc kubenswrapper[4900]: I1202 13:45:05.921689 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-utilities\") pod \"redhat-marketplace-tw2xg\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:45:05 crc kubenswrapper[4900]: E1202 13:45:05.922038 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:45:06.422024213 +0000 UTC m=+151.837838064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xffzf" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.024457 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.024738 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-catalog-content\") pod \"redhat-marketplace-tw2xg\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.024770 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksmjj\" (UniqueName: \"kubernetes.io/projected/d5f5c196-c4c3-467a-882c-e8a39aabbede-kube-api-access-ksmjj\") pod \"redhat-marketplace-tw2xg\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.024797 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-utilities\") pod \"redhat-marketplace-tw2xg\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.025771 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-utilities\") pod \"redhat-marketplace-tw2xg\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:45:06 crc kubenswrapper[4900]: E1202 13:45:06.025859 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:06.525839942 +0000 UTC m=+151.941653793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.026057 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-catalog-content\") pod \"redhat-marketplace-tw2xg\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.028947 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:45:06 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld
Dec 02 13:45:06 crc kubenswrapper[4900]: [+]process-running ok
Dec 02 13:45:06 crc kubenswrapper[4900]: healthz check failed
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.028989 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.073538 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksmjj\" (UniqueName: \"kubernetes.io/projected/d5f5c196-c4c3-467a-882c-e8a39aabbede-kube-api-access-ksmjj\") pod \"redhat-marketplace-tw2xg\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.076294 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zkbx"]
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.096994 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.107905 4900 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-02T13:45:05.608722443Z","Handler":null,"Name":""}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.122274 4900 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.122311 4900 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.125694 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.129631 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.129697 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.158707 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xffzf\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.231888 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.242775 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zkbx" event={"ID":"3dc4aaac-9b9e-42e6-b943-25236645d1b2","Type":"ContainerStarted","Data":"88e5088246db4f297f05cf9aa3f4d7e62c98ce07d883e362aa81760be287671d"}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.248222 4900 generic.go:334] "Generic (PLEG): container finished" podID="5982b283-c40f-4ff6-9ee9-55a16f1db376" containerID="7adaa8671021c58b04fe41d1fde12fc70775516be7da16763304915be6f94f84" exitCode=0
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.248272 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ml6" event={"ID":"5982b283-c40f-4ff6-9ee9-55a16f1db376","Type":"ContainerDied","Data":"7adaa8671021c58b04fe41d1fde12fc70775516be7da16763304915be6f94f84"}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.248288 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ml6" event={"ID":"5982b283-c40f-4ff6-9ee9-55a16f1db376","Type":"ContainerStarted","Data":"a76989282269731c1329e73acd66e3e3b779c29784e1301b6634a816e1ccfb2b"}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.251321 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.258571 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-77dgs" event={"ID":"4dda8c36-2a02-4199-a2e3-33ae4a218883","Type":"ContainerStarted","Data":"476e966a13cd4cc5b56dcc5c0b5d696b1376050b85db79f3c8834557fe3eb1a0"}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.259054 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-77dgs" event={"ID":"4dda8c36-2a02-4199-a2e3-33ae4a218883","Type":"ContainerStarted","Data":"873d5777480b0cc2f3e474ff41646ef9c37944243c45805d2d2795a688a05017"}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.260324 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.265137 4900 generic.go:334] "Generic (PLEG): container finished" podID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" containerID="506ab442b1ae273d4a05aa637d76b8238f124113532966eea1dee11ea8f4e72d" exitCode=0
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.265201 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkxxn" event={"ID":"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd","Type":"ContainerDied","Data":"506ab442b1ae273d4a05aa637d76b8238f124113532966eea1dee11ea8f4e72d"}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.265224 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkxxn" event={"ID":"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd","Type":"ContainerStarted","Data":"e58a49d60817437d912cf5e9fc46f70f2792d99be2abb0d02a2ec93deda8d5a9"}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.269313 4900 generic.go:334] "Generic (PLEG): container finished" podID="798784ef-f2ba-494b-a896-443aef626a69" containerID="4fd5c1af7da3393801cf9a92e939a4c39213f7a0a046e2c0b8af7f60bcefed58" exitCode=0
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.269372 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7jfg" event={"ID":"798784ef-f2ba-494b-a896-443aef626a69","Type":"ContainerDied","Data":"4fd5c1af7da3393801cf9a92e939a4c39213f7a0a046e2c0b8af7f60bcefed58"}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.269390 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7jfg" event={"ID":"798784ef-f2ba-494b-a896-443aef626a69","Type":"ContainerStarted","Data":"59772b68801801c779a6cfd362899e6447bc9aad26df4ab57e3694b65a57588a"}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.287025 4900 generic.go:334] "Generic (PLEG): container finished" podID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" containerID="afa5cf980562187371f887f6b2d9cb9659211b0d211ec8a66601c68c6b687221" exitCode=0
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.288576 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sjzb" event={"ID":"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37","Type":"ContainerDied","Data":"afa5cf980562187371f887f6b2d9cb9659211b0d211ec8a66601c68c6b687221"}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.288605 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sjzb" event={"ID":"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37","Type":"ContainerStarted","Data":"dfd29f995880190052f8296d1017cdef77fcc3eadb87632577ebdc2eb516ce87"}
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.296270 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.296495 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.417859 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2xg"]
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.430900 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.652385 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xffzf"]
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.730177 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvppg"]
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.731507 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.736135 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.744466 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvppg"]
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.841392 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-catalog-content\") pod \"redhat-operators-rvppg\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.841528 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-utilities\") pod \"redhat-operators-rvppg\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.841572 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7prj\" (UniqueName: \"kubernetes.io/projected/60614c3a-d991-4156-9d83-55ab06706291-kube-api-access-g7prj\") pod \"redhat-operators-rvppg\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.922535 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="535396d3-f3c8-4175-a498-526e02960674" path="/var/lib/kubelet/pods/535396d3-f3c8-4175-a498-526e02960674/volumes"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.924917 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.942917 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7prj\" (UniqueName: \"kubernetes.io/projected/60614c3a-d991-4156-9d83-55ab06706291-kube-api-access-g7prj\") pod \"redhat-operators-rvppg\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.942976 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-catalog-content\") pod \"redhat-operators-rvppg\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.943045 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-utilities\") pod \"redhat-operators-rvppg\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.943488 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-utilities\") pod \"redhat-operators-rvppg\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.943688 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-catalog-content\") pod \"redhat-operators-rvppg\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:45:06 crc kubenswrapper[4900]: I1202 13:45:06.969211 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7prj\" (UniqueName: \"kubernetes.io/projected/60614c3a-d991-4156-9d83-55ab06706291-kube-api-access-g7prj\") pod \"redhat-operators-rvppg\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.025366 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:45:07 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld
Dec 02 13:45:07 crc kubenswrapper[4900]: [+]process-running ok
Dec 02 13:45:07 crc kubenswrapper[4900]: healthz check failed
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.025435 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.074384 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.133027 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vs7wd"]
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.134298 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vs7wd"
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.148002 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vs7wd"]
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.247212 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-utilities\") pod \"redhat-operators-vs7wd\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") " pod="openshift-marketplace/redhat-operators-vs7wd"
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.247319 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrq79\" (UniqueName: \"kubernetes.io/projected/3353d0f3-48b8-4b6a-bb09-d19a523098b0-kube-api-access-jrq79\") pod \"redhat-operators-vs7wd\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") " pod="openshift-marketplace/redhat-operators-vs7wd"
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.247392 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-catalog-content\") pod \"redhat-operators-vs7wd\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") " pod="openshift-marketplace/redhat-operators-vs7wd"
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.297275 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-77dgs" event={"ID":"4dda8c36-2a02-4199-a2e3-33ae4a218883","Type":"ContainerStarted","Data":"c3280a8e5c16b0c6d92c3662e3df455e02aac1c1b928acdc9194d3a1f20d0cee"}
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.299633 4900 generic.go:334] "Generic (PLEG): container finished" podID="d5f5c196-c4c3-467a-882c-e8a39aabbede" containerID="ca492db0037642ca8876da313a96d25ab6ae2f698c872aab0879a446f157a62d" exitCode=0
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.299790 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2xg" event={"ID":"d5f5c196-c4c3-467a-882c-e8a39aabbede","Type":"ContainerDied","Data":"ca492db0037642ca8876da313a96d25ab6ae2f698c872aab0879a446f157a62d"}
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.299821 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2xg" event={"ID":"d5f5c196-c4c3-467a-882c-e8a39aabbede","Type":"ContainerStarted","Data":"defaa42d3b4660938dc62c1f9eb165a5712cdfe61aa0a33fe8f48b1140bf527b"}
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.302002 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" event={"ID":"ff406d69-c78d-478d-947c-c1b9ae6ae503","Type":"ContainerStarted","Data":"d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1"}
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.302059 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" event={"ID":"ff406d69-c78d-478d-947c-c1b9ae6ae503","Type":"ContainerStarted","Data":"765c375ee6d31cb96cda5b07c555f65bb70d0a9fce86b48f6a1bf577a33b48a9"}
Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.302099 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.305675 4900 generic.go:334] "Generic (PLEG): container finished" podID="8694fceb-8a5b-41a0-8c8a-2dbca31557ca" containerID="64ab29990787ef15b1e31f7b50b444685fad488c5354f36b0d04d47d3284e09a" exitCode=0 Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.305723 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" event={"ID":"8694fceb-8a5b-41a0-8c8a-2dbca31557ca","Type":"ContainerDied","Data":"64ab29990787ef15b1e31f7b50b444685fad488c5354f36b0d04d47d3284e09a"} Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.309257 4900 generic.go:334] "Generic (PLEG): container finished" podID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerID="188c8c612272e9b8da840e7be48577ca7a2b09776a1c1f694a0cec7db28d2d16" exitCode=0 Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.311566 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zkbx" event={"ID":"3dc4aaac-9b9e-42e6-b943-25236645d1b2","Type":"ContainerDied","Data":"188c8c612272e9b8da840e7be48577ca7a2b09776a1c1f694a0cec7db28d2d16"} Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.322495 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-77dgs" podStartSLOduration=11.322477304 podStartE2EDuration="11.322477304s" podCreationTimestamp="2025-12-02 13:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:07.321074375 +0000 UTC m=+152.736888226" watchObservedRunningTime="2025-12-02 13:45:07.322477304 +0000 UTC m=+152.738291155" Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.332824 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvppg"] Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.348945 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrq79\" (UniqueName: \"kubernetes.io/projected/3353d0f3-48b8-4b6a-bb09-d19a523098b0-kube-api-access-jrq79\") pod \"redhat-operators-vs7wd\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") " pod="openshift-marketplace/redhat-operators-vs7wd" Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.349008 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-catalog-content\") pod \"redhat-operators-vs7wd\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") " pod="openshift-marketplace/redhat-operators-vs7wd" Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.349050 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-utilities\") pod \"redhat-operators-vs7wd\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") " pod="openshift-marketplace/redhat-operators-vs7wd" Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.349570 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-utilities\") pod \"redhat-operators-vs7wd\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") " pod="openshift-marketplace/redhat-operators-vs7wd" Dec 02 13:45:07 crc 
kubenswrapper[4900]: I1202 13:45:07.349604 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-catalog-content\") pod \"redhat-operators-vs7wd\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") " pod="openshift-marketplace/redhat-operators-vs7wd" Dec 02 13:45:07 crc kubenswrapper[4900]: W1202 13:45:07.353002 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60614c3a_d991_4156_9d83_55ab06706291.slice/crio-f6bcff97c2aa8d3842e398cdcba50aa591eb8924cb0437692e7b6711ecadc733 WatchSource:0}: Error finding container f6bcff97c2aa8d3842e398cdcba50aa591eb8924cb0437692e7b6711ecadc733: Status 404 returned error can't find the container with id f6bcff97c2aa8d3842e398cdcba50aa591eb8924cb0437692e7b6711ecadc733 Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.369577 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrq79\" (UniqueName: \"kubernetes.io/projected/3353d0f3-48b8-4b6a-bb09-d19a523098b0-kube-api-access-jrq79\") pod \"redhat-operators-vs7wd\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") " pod="openshift-marketplace/redhat-operators-vs7wd" Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.375899 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" podStartSLOduration=131.375886625 podStartE2EDuration="2m11.375886625s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:07.361435331 +0000 UTC m=+152.777249182" watchObservedRunningTime="2025-12-02 13:45:07.375886625 +0000 UTC m=+152.791700476" Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.458441 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vs7wd" Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.690525 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vs7wd"] Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.861260 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:45:07 crc kubenswrapper[4900]: I1202 13:45:07.869923 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4s6cc" Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.025848 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:08 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:08 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:08 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.026115 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.330167 4900 generic.go:334] "Generic (PLEG): container finished" podID="60614c3a-d991-4156-9d83-55ab06706291" containerID="172bd9e8db11f6f4c4094cecbc79bacf707a36d0a71325c7fc7f16d9ecb7ec07" exitCode=0 Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.330265 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvppg" event={"ID":"60614c3a-d991-4156-9d83-55ab06706291","Type":"ContainerDied","Data":"172bd9e8db11f6f4c4094cecbc79bacf707a36d0a71325c7fc7f16d9ecb7ec07"} Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.330311 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvppg" event={"ID":"60614c3a-d991-4156-9d83-55ab06706291","Type":"ContainerStarted","Data":"f6bcff97c2aa8d3842e398cdcba50aa591eb8924cb0437692e7b6711ecadc733"} Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.362442 4900 generic.go:334] "Generic (PLEG): container finished" podID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerID="17b7c4ee107548049890f1eb0a6eeb760fc87c76b827265c2a03c32dfa3579fe" exitCode=0 Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.362530 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs7wd" event={"ID":"3353d0f3-48b8-4b6a-bb09-d19a523098b0","Type":"ContainerDied","Data":"17b7c4ee107548049890f1eb0a6eeb760fc87c76b827265c2a03c32dfa3579fe"} Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.362610 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs7wd" event={"ID":"3353d0f3-48b8-4b6a-bb09-d19a523098b0","Type":"ContainerStarted","Data":"07df76bb23142518c483577a571664ef20cd8de0b0f5382f2e9aa33b3201e34c"} Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.771034 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.905852 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7wvz\" (UniqueName: \"kubernetes.io/projected/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-kube-api-access-m7wvz\") pod \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.905976 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-config-volume\") pod \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.906014 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-secret-volume\") pod \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\" (UID: \"8694fceb-8a5b-41a0-8c8a-2dbca31557ca\") " Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.906964 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-config-volume" (OuterVolumeSpecName: "config-volume") pod "8694fceb-8a5b-41a0-8c8a-2dbca31557ca" (UID: "8694fceb-8a5b-41a0-8c8a-2dbca31557ca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.912388 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8694fceb-8a5b-41a0-8c8a-2dbca31557ca" (UID: "8694fceb-8a5b-41a0-8c8a-2dbca31557ca"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.941602 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-kube-api-access-m7wvz" (OuterVolumeSpecName: "kube-api-access-m7wvz") pod "8694fceb-8a5b-41a0-8c8a-2dbca31557ca" (UID: "8694fceb-8a5b-41a0-8c8a-2dbca31557ca"). InnerVolumeSpecName "kube-api-access-m7wvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.996598 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:45:08 crc kubenswrapper[4900]: I1202 13:45:08.996661 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.009794 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.010123 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.010153 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.010166 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7wvz\" (UniqueName: \"kubernetes.io/projected/8694fceb-8a5b-41a0-8c8a-2dbca31557ca-kube-api-access-m7wvz\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.021542 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5z2g6" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.027303 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:09 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:09 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:09 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.027392 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.266852 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.267740 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.270588 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5wqjj" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.276897 4900 patch_prober.go:28] interesting pod/console-f9d7485db-vrdh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.276961 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vrdh8" 
podUID="95616fe1-4979-433d-afce-3235d5dab8a5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.404704 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" event={"ID":"8694fceb-8a5b-41a0-8c8a-2dbca31557ca","Type":"ContainerDied","Data":"f803a205d258669b5b3095559b679d8a930e1502c9595c17e2e102042a8f7806"} Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.404774 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f803a205d258669b5b3095559b679d8a930e1502c9595c17e2e102042a8f7806" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.404869 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.420875 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xms5s" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.994194 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 13:45:09 crc kubenswrapper[4900]: E1202 13:45:09.995375 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8694fceb-8a5b-41a0-8c8a-2dbca31557ca" containerName="collect-profiles" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.995403 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8694fceb-8a5b-41a0-8c8a-2dbca31557ca" containerName="collect-profiles" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.995577 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="8694fceb-8a5b-41a0-8c8a-2dbca31557ca" containerName="collect-profiles" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.996095 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.998490 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 02 13:45:09 crc kubenswrapper[4900]: I1202 13:45:09.998883 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 02 13:45:10 crc kubenswrapper[4900]: I1202 13:45:10.007770 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 13:45:10 crc kubenswrapper[4900]: I1202 13:45:10.022783 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:10 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:10 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:10 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:10 crc kubenswrapper[4900]: I1202 13:45:10.022836 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:10 crc kubenswrapper[4900]: I1202 13:45:10.137545 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ee4e4fef-dc84-4c16-8725-4fe572a3162e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:45:10 crc kubenswrapper[4900]: I1202 13:45:10.137606 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ee4e4fef-dc84-4c16-8725-4fe572a3162e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:45:10 crc kubenswrapper[4900]: I1202 13:45:10.238973 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ee4e4fef-dc84-4c16-8725-4fe572a3162e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:45:10 crc kubenswrapper[4900]: I1202 13:45:10.239040 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ee4e4fef-dc84-4c16-8725-4fe572a3162e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:45:10 crc kubenswrapper[4900]: I1202 13:45:10.239117 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ee4e4fef-dc84-4c16-8725-4fe572a3162e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:45:10 crc kubenswrapper[4900]: I1202 
13:45:10.264596 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ee4e4fef-dc84-4c16-8725-4fe572a3162e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:45:10 crc kubenswrapper[4900]: I1202 13:45:10.312385 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:45:10 crc kubenswrapper[4900]: I1202 13:45:10.833023 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.029679 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:11 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:11 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:11 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.032846 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.135400 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.136198 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.139688 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.143473 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.190026 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.192302 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xtsj2" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.287751 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/574f0e39-03f2-420f-850c-a6c99574cd24-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"574f0e39-03f2-420f-850c-a6c99574cd24\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.287844 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/574f0e39-03f2-420f-850c-a6c99574cd24-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"574f0e39-03f2-420f-850c-a6c99574cd24\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.390584 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/574f0e39-03f2-420f-850c-a6c99574cd24-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"574f0e39-03f2-420f-850c-a6c99574cd24\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.390658 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/574f0e39-03f2-420f-850c-a6c99574cd24-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"574f0e39-03f2-420f-850c-a6c99574cd24\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.390827 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/574f0e39-03f2-420f-850c-a6c99574cd24-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"574f0e39-03f2-420f-850c-a6c99574cd24\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.410171 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/574f0e39-03f2-420f-850c-a6c99574cd24-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"574f0e39-03f2-420f-850c-a6c99574cd24\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.469679 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ee4e4fef-dc84-4c16-8725-4fe572a3162e","Type":"ContainerStarted","Data":"1f3d269e31aaffd88699a322eb2cc83bc0cf4cab12039ae7077d53a922649a8a"} Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.517851 4900 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:45:11 crc kubenswrapper[4900]: I1202 13:45:11.886773 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 13:45:11 crc kubenswrapper[4900]: W1202 13:45:11.914800 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod574f0e39_03f2_420f_850c_a6c99574cd24.slice/crio-4cb08011a35ce3e969ef59c1a49765c28786903241950444e7b50a782e0a7cfb WatchSource:0}: Error finding container 4cb08011a35ce3e969ef59c1a49765c28786903241950444e7b50a782e0a7cfb: Status 404 returned error can't find the container with id 4cb08011a35ce3e969ef59c1a49765c28786903241950444e7b50a782e0a7cfb Dec 02 13:45:12 crc kubenswrapper[4900]: I1202 13:45:12.023972 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:12 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:12 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:12 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:12 crc kubenswrapper[4900]: I1202 13:45:12.024415 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:12 crc kubenswrapper[4900]: I1202 13:45:12.484001 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ee4e4fef-dc84-4c16-8725-4fe572a3162e","Type":"ContainerStarted","Data":"d4ec967503fb3a86c220b894f78bdb5aa9ec603e712aef4122033eb2391fcd3c"} Dec 02 13:45:12 crc kubenswrapper[4900]: I1202 13:45:12.499546 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.499528937 podStartE2EDuration="3.499528937s" podCreationTimestamp="2025-12-02 13:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:12.49608547 +0000 UTC m=+157.911899321" watchObservedRunningTime="2025-12-02 13:45:12.499528937 +0000 UTC m=+157.915342778" Dec 02 13:45:12 crc kubenswrapper[4900]: I1202 13:45:12.501968 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"574f0e39-03f2-420f-850c-a6c99574cd24","Type":"ContainerStarted","Data":"4cb08011a35ce3e969ef59c1a49765c28786903241950444e7b50a782e0a7cfb"} Dec 02 13:45:13 crc kubenswrapper[4900]: I1202 13:45:13.023167 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:13 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:13 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:13 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:13 crc kubenswrapper[4900]: I1202 13:45:13.023241 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" 
podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:13 crc kubenswrapper[4900]: I1202 13:45:13.092957 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:45:13 crc kubenswrapper[4900]: I1202 13:45:13.535177 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"574f0e39-03f2-420f-850c-a6c99574cd24","Type":"ContainerStarted","Data":"d67cb35fc33379d395a7e1e8101526e113e4e845757d95dd71c65caa75ab20c8"} Dec 02 13:45:13 crc kubenswrapper[4900]: I1202 13:45:13.557754 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.5577252 podStartE2EDuration="2.5577252s" podCreationTimestamp="2025-12-02 13:45:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:13.556407003 +0000 UTC m=+158.972220854" watchObservedRunningTime="2025-12-02 13:45:13.5577252 +0000 UTC m=+158.973539041" Dec 02 13:45:13 crc kubenswrapper[4900]: I1202 13:45:13.559086 4900 generic.go:334] "Generic (PLEG): container finished" podID="ee4e4fef-dc84-4c16-8725-4fe572a3162e" containerID="d4ec967503fb3a86c220b894f78bdb5aa9ec603e712aef4122033eb2391fcd3c" exitCode=0 Dec 02 13:45:13 crc kubenswrapper[4900]: I1202 13:45:13.559146 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ee4e4fef-dc84-4c16-8725-4fe572a3162e","Type":"ContainerDied","Data":"d4ec967503fb3a86c220b894f78bdb5aa9ec603e712aef4122033eb2391fcd3c"} Dec 02 13:45:14 crc kubenswrapper[4900]: I1202 13:45:14.024481 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:14 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:14 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:14 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:14 crc kubenswrapper[4900]: I1202 13:45:14.024555 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:14 crc kubenswrapper[4900]: I1202 13:45:14.578273 4900 generic.go:334] "Generic (PLEG): container finished" podID="574f0e39-03f2-420f-850c-a6c99574cd24" containerID="d67cb35fc33379d395a7e1e8101526e113e4e845757d95dd71c65caa75ab20c8" exitCode=0 Dec 02 13:45:14 crc kubenswrapper[4900]: I1202 13:45:14.578402 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"574f0e39-03f2-420f-850c-a6c99574cd24","Type":"ContainerDied","Data":"d67cb35fc33379d395a7e1e8101526e113e4e845757d95dd71c65caa75ab20c8"} Dec 02 13:45:15 crc kubenswrapper[4900]: I1202 13:45:15.023675 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:15 crc 
kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:15 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:15 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:15 crc kubenswrapper[4900]: I1202 13:45:15.023788 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:15 crc kubenswrapper[4900]: I1202 13:45:15.116826 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:45:15 crc kubenswrapper[4900]: I1202 13:45:15.116906 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:45:16 crc kubenswrapper[4900]: I1202 13:45:16.023131 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:16 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:16 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:16 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:16 crc kubenswrapper[4900]: I1202 13:45:16.023419 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:17 crc kubenswrapper[4900]: I1202 13:45:17.024236 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:17 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:17 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:17 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:17 crc kubenswrapper[4900]: I1202 13:45:17.024307 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:18 crc kubenswrapper[4900]: I1202 13:45:18.025005 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:18 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:18 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:18 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:18 crc kubenswrapper[4900]: I1202 13:45:18.025083 4900 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:18 crc kubenswrapper[4900]: I1202 13:45:18.721055 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:45:18 crc kubenswrapper[4900]: I1202 13:45:18.799891 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kube-api-access\") pod \"ee4e4fef-dc84-4c16-8725-4fe572a3162e\" (UID: \"ee4e4fef-dc84-4c16-8725-4fe572a3162e\") " Dec 02 13:45:18 crc kubenswrapper[4900]: I1202 13:45:18.806105 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ee4e4fef-dc84-4c16-8725-4fe572a3162e" (UID: "ee4e4fef-dc84-4c16-8725-4fe572a3162e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:45:18 crc kubenswrapper[4900]: I1202 13:45:18.900896 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kubelet-dir\") pod \"ee4e4fef-dc84-4c16-8725-4fe572a3162e\" (UID: \"ee4e4fef-dc84-4c16-8725-4fe572a3162e\") " Dec 02 13:45:18 crc kubenswrapper[4900]: I1202 13:45:18.901139 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:18 crc kubenswrapper[4900]: I1202 13:45:18.901337 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ee4e4fef-dc84-4c16-8725-4fe572a3162e" (UID: "ee4e4fef-dc84-4c16-8725-4fe572a3162e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.002895 4900 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee4e4fef-dc84-4c16-8725-4fe572a3162e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.023464 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:19 crc kubenswrapper[4900]: [-]has-synced failed: reason withheld Dec 02 13:45:19 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:19 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.023556 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.256413 4900 patch_prober.go:28] interesting pod/console-f9d7485db-vrdh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.256479 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vrdh8" podUID="95616fe1-4979-433d-afce-3235d5dab8a5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.611002 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ee4e4fef-dc84-4c16-8725-4fe572a3162e","Type":"ContainerDied","Data":"1f3d269e31aaffd88699a322eb2cc83bc0cf4cab12039ae7077d53a922649a8a"} Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.611617 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f3d269e31aaffd88699a322eb2cc83bc0cf4cab12039ae7077d53a922649a8a" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.611060 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.611897 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.616678 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c63b5f6-db87-48a2-b87e-5442db707843-metrics-certs\") pod \"network-metrics-daemon-kzhwn\" (UID: \"1c63b5f6-db87-48a2-b87e-5442db707843\") " pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.689225 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.748440 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kzhwn" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.814841 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/574f0e39-03f2-420f-850c-a6c99574cd24-kubelet-dir\") pod \"574f0e39-03f2-420f-850c-a6c99574cd24\" (UID: \"574f0e39-03f2-420f-850c-a6c99574cd24\") " Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.814981 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/574f0e39-03f2-420f-850c-a6c99574cd24-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "574f0e39-03f2-420f-850c-a6c99574cd24" (UID: "574f0e39-03f2-420f-850c-a6c99574cd24"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.815038 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/574f0e39-03f2-420f-850c-a6c99574cd24-kube-api-access\") pod \"574f0e39-03f2-420f-850c-a6c99574cd24\" (UID: \"574f0e39-03f2-420f-850c-a6c99574cd24\") " Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.818425 4900 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/574f0e39-03f2-420f-850c-a6c99574cd24-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.820480 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574f0e39-03f2-420f-850c-a6c99574cd24-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "574f0e39-03f2-420f-850c-a6c99574cd24" (UID: "574f0e39-03f2-420f-850c-a6c99574cd24"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:45:19 crc kubenswrapper[4900]: I1202 13:45:19.920778 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/574f0e39-03f2-420f-850c-a6c99574cd24-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:20 crc kubenswrapper[4900]: I1202 13:45:20.028255 4900 patch_prober.go:28] interesting pod/router-default-5444994796-5z2g6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:45:20 crc kubenswrapper[4900]: [+]has-synced ok Dec 02 13:45:20 crc kubenswrapper[4900]: [+]process-running ok Dec 02 13:45:20 crc kubenswrapper[4900]: healthz check failed Dec 02 13:45:20 crc kubenswrapper[4900]: I1202 13:45:20.029028 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5z2g6" podUID="629226c0-c1d6-4d74-a041-7eb24832256f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:45:20 crc kubenswrapper[4900]: I1202 13:45:20.621943 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"574f0e39-03f2-420f-850c-a6c99574cd24","Type":"ContainerDied","Data":"4cb08011a35ce3e969ef59c1a49765c28786903241950444e7b50a782e0a7cfb"} Dec 02 13:45:20 crc kubenswrapper[4900]: I1202 13:45:20.622009 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb08011a35ce3e969ef59c1a49765c28786903241950444e7b50a782e0a7cfb" Dec 02 13:45:20 crc kubenswrapper[4900]: I1202 13:45:20.622016 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 02 13:45:21 crc kubenswrapper[4900]: I1202 13:45:21.023853 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5z2g6"
Dec 02 13:45:21 crc kubenswrapper[4900]: I1202 13:45:21.026799 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5z2g6"
Dec 02 13:45:26 crc kubenswrapper[4900]: I1202 13:45:26.438611 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf"
Dec 02 13:45:31 crc kubenswrapper[4900]: I1202 13:45:31.190226 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-vrdh8"
Dec 02 13:45:31 crc kubenswrapper[4900]: I1202 13:45:31.198679 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-vrdh8"
Dec 02 13:45:39 crc kubenswrapper[4900]: I1202 13:45:39.351558 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5tmt"
Dec 02 13:45:41 crc kubenswrapper[4900]: I1202 13:45:41.401547 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:45:42 crc kubenswrapper[4900]: E1202 13:45:42.562241 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 02 13:45:42 crc kubenswrapper[4900]: E1202 13:45:42.562636 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m26kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2sjzb_openshift-marketplace(13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 02 13:45:42 crc kubenswrapper[4900]: E1202 13:45:42.564171 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2sjzb" podUID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37"
Dec 02 13:45:42 crc kubenswrapper[4900]: E1202 13:45:42.595710 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 02 13:45:42 crc kubenswrapper[4900]: E1202 13:45:42.595951 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-984sg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-d7jfg_openshift-marketplace(798784ef-f2ba-494b-a896-443aef626a69): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 02 13:45:42 crc kubenswrapper[4900]: E1202 13:45:42.598930 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-d7jfg" podUID="798784ef-f2ba-494b-a896-443aef626a69"
Dec 02 13:45:43 crc kubenswrapper[4900]: E1202 13:45:43.958269 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2sjzb" podUID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37"
Dec 02 13:45:43 crc kubenswrapper[4900]: E1202 13:45:43.958249 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-d7jfg" podUID="798784ef-f2ba-494b-a896-443aef626a69"
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.253969 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.255055 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksmjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tw2xg_openshift-marketplace(d5f5c196-c4c3-467a-882c-e8a39aabbede): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.256452 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tw2xg" podUID="d5f5c196-c4c3-467a-882c-e8a39aabbede"
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.269655 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
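The pod_workers.go entries above trace the kubelet's standard handling of a failed pull: the first sync attempt surfaces ErrImagePull, and the syncs that follow (13:45:43 onward) are throttled as ImagePullBackOff until a backoff window expires. As a rough sketch only (the kubelet wires this through its runtime manager; the 10s initial delay and 5m cap below are illustrative assumptions, not values read from this log), the throttling behaves like a capped exponential backoff keyed per pod and image:

package main

import (
	"fmt"
	"time"
)

// backoff is a capped exponential backoff keyed per pod+image, loosely
// mirroring the ErrImagePull -> ImagePullBackOff progression in the log
// above. The initial delay and cap are illustrative assumptions.
type backoff struct {
	initial, max time.Duration
	next         map[string]time.Duration
	until        map[string]time.Time
}

func newBackoff(initial, max time.Duration) *backoff {
	return &backoff{
		initial: initial, max: max,
		next:  map[string]time.Duration{},
		until: map[string]time.Time{},
	}
}

// fail records a pull failure and doubles the delay, up to the cap.
func (b *backoff) fail(key string, now time.Time) {
	d, ok := b.next[key]
	if !ok {
		d = b.initial
	} else {
		d *= 2
		if d > b.max {
			d = b.max
		}
	}
	b.next[key] = d
	b.until[key] = now.Add(d)
}

// inBackoff reports whether a new pull attempt should be skipped,
// i.e. the sync would log "Back-off pulling image ...".
func (b *backoff) inBackoff(key string, now time.Time) bool {
	return now.Before(b.until[key])
}

func main() {
	b := newBackoff(10*time.Second, 5*time.Minute)
	key := "community-operators-2sjzb/registry.redhat.io/redhat/community-operator-index:v4.18"
	now := time.Now()
	b.fail(key, now)                                    // ErrImagePull at 13:45:42
	fmt.Println(b.inBackoff(key, now.Add(time.Second))) // true: the 13:45:43 sync reports ImagePullBackOff
}

Each further failure doubles the delay up to the cap, which is consistent with the back-off messages repeating for these catalog pods through the rest of the section.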
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.269899 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wsh4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4zkbx_openshift-marketplace(3dc4aaac-9b9e-42e6-b943-25236645d1b2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.271193 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4zkbx" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2"
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.382802 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.383445 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7prj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rvppg_openshift-marketplace(60614c3a-d991-4156-9d83-55ab06706291): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.384771 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rvppg" podUID="60614c3a-d991-4156-9d83-55ab06706291"
Dec 02 13:45:44 crc kubenswrapper[4900]: I1202 13:45:44.443415 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kzhwn"]
Dec 02 13:45:44 crc kubenswrapper[4900]: I1202 13:45:44.801683 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs7wd" event={"ID":"3353d0f3-48b8-4b6a-bb09-d19a523098b0","Type":"ContainerStarted","Data":"c6d2e6693392fe607ab060038f4d2614a77a7fab1f08002973f29111372c3764"}
Dec 02 13:45:44 crc kubenswrapper[4900]: I1202 13:45:44.805278 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ml6" event={"ID":"5982b283-c40f-4ff6-9ee9-55a16f1db376","Type":"ContainerStarted","Data":"b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf"}
Dec 02 13:45:44 crc kubenswrapper[4900]: I1202 13:45:44.808133 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkxxn" event={"ID":"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd","Type":"ContainerStarted","Data":"412114cda5bc5f181b0268dcad24703556ff08baf0082aca8218126632c6aa73"}
Dec 02 13:45:44 crc kubenswrapper[4900]: I1202 13:45:44.810557 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" event={"ID":"1c63b5f6-db87-48a2-b87e-5442db707843","Type":"ContainerStarted","Data":"9b57a7458ef2f27c9ce74d9d1a03f238c97f6f0331840e49051c4c22ce3b8ee9"}
Dec 02 13:45:44 crc kubenswrapper[4900]: I1202 13:45:44.810574 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" event={"ID":"1c63b5f6-db87-48a2-b87e-5442db707843","Type":"ContainerStarted","Data":"cf23d9ff3b3fe7bf8c6a01065c94d05c3bd87ad0460b6eba40bdee608a926b6c"}
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.811804 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-tw2xg" podUID="d5f5c196-c4c3-467a-882c-e8a39aabbede"
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.812881 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4zkbx" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2"
Dec 02 13:45:44 crc kubenswrapper[4900]: E1202 13:45:44.825360 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rvppg" podUID="60614c3a-d991-4156-9d83-55ab06706291"
Dec 02 13:45:45 crc kubenswrapper[4900]: I1202 13:45:45.117052 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 13:45:45 crc kubenswrapper[4900]: I1202 13:45:45.117496 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 13:45:45 crc kubenswrapper[4900]: I1202 13:45:45.819612 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kzhwn" event={"ID":"1c63b5f6-db87-48a2-b87e-5442db707843","Type":"ContainerStarted","Data":"1a7723516337dce6089401fed004346b55a6876c9d48f35a4c85cceed207bdde"}
Dec 02 13:45:45 crc kubenswrapper[4900]: I1202 13:45:45.825485 4900 generic.go:334] "Generic (PLEG): container finished" podID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerID="c6d2e6693392fe607ab060038f4d2614a77a7fab1f08002973f29111372c3764" exitCode=0
Dec 02 13:45:45 crc kubenswrapper[4900]: I1202 13:45:45.825703 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs7wd" event={"ID":"3353d0f3-48b8-4b6a-bb09-d19a523098b0","Type":"ContainerDied","Data":"c6d2e6693392fe607ab060038f4d2614a77a7fab1f08002973f29111372c3764"}
Dec 02 13:45:45 crc kubenswrapper[4900]: I1202 13:45:45.829199 4900 generic.go:334] "Generic (PLEG): container finished" podID="5982b283-c40f-4ff6-9ee9-55a16f1db376" containerID="b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf" exitCode=0
event={"ID":"5982b283-c40f-4ff6-9ee9-55a16f1db376","Type":"ContainerDied","Data":"b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf"} Dec 02 13:45:45 crc kubenswrapper[4900]: I1202 13:45:45.842925 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kzhwn" podStartSLOduration=169.842899952 podStartE2EDuration="2m49.842899952s" podCreationTimestamp="2025-12-02 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:45.838628262 +0000 UTC m=+191.254442113" watchObservedRunningTime="2025-12-02 13:45:45.842899952 +0000 UTC m=+191.258713813" Dec 02 13:45:45 crc kubenswrapper[4900]: I1202 13:45:45.845345 4900 generic.go:334] "Generic (PLEG): container finished" podID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" containerID="412114cda5bc5f181b0268dcad24703556ff08baf0082aca8218126632c6aa73" exitCode=0 Dec 02 13:45:45 crc kubenswrapper[4900]: I1202 13:45:45.845424 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkxxn" event={"ID":"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd","Type":"ContainerDied","Data":"412114cda5bc5f181b0268dcad24703556ff08baf0082aca8218126632c6aa73"} Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.333220 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 13:45:46 crc kubenswrapper[4900]: E1202 13:45:46.333785 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4e4fef-dc84-4c16-8725-4fe572a3162e" containerName="pruner" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.333801 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4e4fef-dc84-4c16-8725-4fe572a3162e" containerName="pruner" Dec 02 13:45:46 crc kubenswrapper[4900]: E1202 13:45:46.333820 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574f0e39-03f2-420f-850c-a6c99574cd24" containerName="pruner" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.333830 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="574f0e39-03f2-420f-850c-a6c99574cd24" containerName="pruner" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.333951 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4e4fef-dc84-4c16-8725-4fe572a3162e" containerName="pruner" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.333965 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="574f0e39-03f2-420f-850c-a6c99574cd24" containerName="pruner" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.334393 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.346071 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.346369 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.349869 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.475866 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed8e9ef-0d39-4fd7-9f86-2037114af8d3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fed8e9ef-0d39-4fd7-9f86-2037114af8d3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.476276 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed8e9ef-0d39-4fd7-9f86-2037114af8d3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fed8e9ef-0d39-4fd7-9f86-2037114af8d3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.578070 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed8e9ef-0d39-4fd7-9f86-2037114af8d3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fed8e9ef-0d39-4fd7-9f86-2037114af8d3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.578174 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed8e9ef-0d39-4fd7-9f86-2037114af8d3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fed8e9ef-0d39-4fd7-9f86-2037114af8d3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.578185 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed8e9ef-0d39-4fd7-9f86-2037114af8d3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fed8e9ef-0d39-4fd7-9f86-2037114af8d3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.601508 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed8e9ef-0d39-4fd7-9f86-2037114af8d3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fed8e9ef-0d39-4fd7-9f86-2037114af8d3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.697946 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.867353 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ml6" event={"ID":"5982b283-c40f-4ff6-9ee9-55a16f1db376","Type":"ContainerStarted","Data":"3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68"} Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.877865 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkxxn" event={"ID":"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd","Type":"ContainerStarted","Data":"2f85f1070e57c6616c808b9a66212b46ff8ddbd94b6c7802cc049bf6585af38d"} Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.891371 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs7wd" event={"ID":"3353d0f3-48b8-4b6a-bb09-d19a523098b0","Type":"ContainerStarted","Data":"ded8e4e69d9bfaef5b21920f38cba22f54e934ee632c6b309b39e2751eda373c"} Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.895940 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s7ml6" podStartSLOduration=3.59905842 podStartE2EDuration="43.895911749s" podCreationTimestamp="2025-12-02 13:45:03 +0000 UTC" firstStartedPulling="2025-12-02 13:45:06.250900948 +0000 UTC m=+151.666714799" lastFinishedPulling="2025-12-02 13:45:46.547754267 +0000 UTC m=+191.963568128" observedRunningTime="2025-12-02 13:45:46.890996882 +0000 UTC m=+192.306810733" watchObservedRunningTime="2025-12-02 13:45:46.895911749 +0000 UTC m=+192.311725610" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.934265 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mkxxn" podStartSLOduration=3.76054984 podStartE2EDuration="43.934220819s" podCreationTimestamp="2025-12-02 13:45:03 +0000 UTC" firstStartedPulling="2025-12-02 13:45:06.266805112 +0000 UTC m=+151.682618963" lastFinishedPulling="2025-12-02 13:45:46.440476081 +0000 UTC m=+191.856289942" observedRunningTime="2025-12-02 13:45:46.913591463 +0000 UTC m=+192.329405334" watchObservedRunningTime="2025-12-02 13:45:46.934220819 +0000 UTC m=+192.350034670" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.935073 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vs7wd" podStartSLOduration=1.875444334 podStartE2EDuration="39.935065353s" podCreationTimestamp="2025-12-02 13:45:07 +0000 UTC" firstStartedPulling="2025-12-02 13:45:08.36611151 +0000 UTC m=+153.781925361" lastFinishedPulling="2025-12-02 13:45:46.425732519 +0000 UTC m=+191.841546380" observedRunningTime="2025-12-02 13:45:46.931510353 +0000 UTC m=+192.347324224" watchObservedRunningTime="2025-12-02 13:45:46.935065353 +0000 UTC m=+192.350879204" Dec 02 13:45:46 crc kubenswrapper[4900]: I1202 13:45:46.952840 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 13:45:47 crc kubenswrapper[4900]: I1202 13:45:47.459008 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vs7wd" Dec 02 13:45:47 crc kubenswrapper[4900]: I1202 13:45:47.459448 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vs7wd" Dec 02 13:45:47 crc kubenswrapper[4900]: I1202 13:45:47.903537 4900 
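The pod_startup_latency_tracker entries above relate their fields by simple arithmetic: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling, taken on the monotonic m= clock). For certified-operators-s7ml6: 43.895911749 - (191.963568128 - 151.666714799) = 3.59905842, matching the logged value. A minimal check, with the values copied from that entry:

package main

import "fmt"

func main() {
	// values from the certified-operators-s7ml6 startup-latency entry above
	e2e := 43.895911749        // podStartE2EDuration, seconds
	firstPull := 151.666714799 // firstStartedPulling, monotonic m= offset, seconds
	lastPull := 191.963568128  // lastFinishedPulling, monotonic m= offset, seconds

	// the SLO duration excludes time spent pulling images
	slo := e2e - (lastPull - firstPull)
	fmt.Printf("podStartSLOduration=%.8f\n", slo) // prints 3.59905842
}

The same relationship holds for the other tracker entries in this section; pods whose pull timestamps are the zero time ("0001-01-01 00:00:00 +0000 UTC") simply report the E2E duration as the SLO duration.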
Dec 02 13:45:47 crc kubenswrapper[4900]: I1202 13:45:47.903537 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fed8e9ef-0d39-4fd7-9f86-2037114af8d3","Type":"ContainerStarted","Data":"2935bb7588fa1da938b32de6e55cc4fcbac2d3e65f2fa9e72a06ed0465179f35"}
Dec 02 13:45:47 crc kubenswrapper[4900]: I1202 13:45:47.903952 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fed8e9ef-0d39-4fd7-9f86-2037114af8d3","Type":"ContainerStarted","Data":"be6f283650b3bee90d14cafb062112a4ccf88c1559e354c5cc023493f63a0347"}
Dec 02 13:45:47 crc kubenswrapper[4900]: I1202 13:45:47.920759 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.9207414900000002 podStartE2EDuration="1.92074149s" podCreationTimestamp="2025-12-02 13:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:47.919161566 +0000 UTC m=+193.334975417" watchObservedRunningTime="2025-12-02 13:45:47.92074149 +0000 UTC m=+193.336555341"
Dec 02 13:45:48 crc kubenswrapper[4900]: I1202 13:45:48.529295 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vs7wd" podUID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerName="registry-server" probeResult="failure" output=<
Dec 02 13:45:48 crc kubenswrapper[4900]: timeout: failed to connect service ":50051" within 1s
Dec 02 13:45:48 crc kubenswrapper[4900]: >
Dec 02 13:45:48 crc kubenswrapper[4900]: I1202 13:45:48.922399 4900 generic.go:334] "Generic (PLEG): container finished" podID="fed8e9ef-0d39-4fd7-9f86-2037114af8d3" containerID="2935bb7588fa1da938b32de6e55cc4fcbac2d3e65f2fa9e72a06ed0465179f35" exitCode=0
Dec 02 13:45:48 crc kubenswrapper[4900]: I1202 13:45:48.923222 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fed8e9ef-0d39-4fd7-9f86-2037114af8d3","Type":"ContainerDied","Data":"2935bb7588fa1da938b32de6e55cc4fcbac2d3e65f2fa9e72a06ed0465179f35"}
Dec 02 13:45:50 crc kubenswrapper[4900]: I1202 13:45:50.174813 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 02 13:45:50 crc kubenswrapper[4900]: I1202 13:45:50.335619 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed8e9ef-0d39-4fd7-9f86-2037114af8d3-kubelet-dir\") pod \"fed8e9ef-0d39-4fd7-9f86-2037114af8d3\" (UID: \"fed8e9ef-0d39-4fd7-9f86-2037114af8d3\") "
Dec 02 13:45:50 crc kubenswrapper[4900]: I1202 13:45:50.335758 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed8e9ef-0d39-4fd7-9f86-2037114af8d3-kube-api-access\") pod \"fed8e9ef-0d39-4fd7-9f86-2037114af8d3\" (UID: \"fed8e9ef-0d39-4fd7-9f86-2037114af8d3\") "
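The Startup probe failure above for redhat-operators-vs7wd reports "timeout: failed to connect service \":50051\" within 1s": port 50051 is the catalog pod's registry-server gRPC port, and the message is the wording of a gRPC health-check client that could not connect within its one-second budget. A minimal sketch of an equivalent check, assuming the standard grpc.health.v1 service (the actual probe command used by the pod is not visible in this log):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// one-second budget, matching the "within 1s" in the probe output above
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// dial the registry-server port that the startup probe targets
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		// mirrors the failure wording seen in the log above
		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
		return
	}
	defer conn.Close()

	// empty Service name asks about the server's overall health
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		fmt.Println("service unhealthy")
		return
	}
	fmt.Println("status: SERVING")
}

Such a failure right after ContainerStarted is unremarkable: the registry server needs a few seconds to load its catalog before it begins listening, and the probe succeeds on a later attempt (the pod reports status="started" at 13:45:57 below).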
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:45:50 crc kubenswrapper[4900]: I1202 13:45:50.336049 4900 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed8e9ef-0d39-4fd7-9f86-2037114af8d3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:50 crc kubenswrapper[4900]: I1202 13:45:50.347285 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed8e9ef-0d39-4fd7-9f86-2037114af8d3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fed8e9ef-0d39-4fd7-9f86-2037114af8d3" (UID: "fed8e9ef-0d39-4fd7-9f86-2037114af8d3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:45:50 crc kubenswrapper[4900]: I1202 13:45:50.437414 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed8e9ef-0d39-4fd7-9f86-2037114af8d3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:45:50 crc kubenswrapper[4900]: I1202 13:45:50.939393 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fed8e9ef-0d39-4fd7-9f86-2037114af8d3","Type":"ContainerDied","Data":"be6f283650b3bee90d14cafb062112a4ccf88c1559e354c5cc023493f63a0347"} Dec 02 13:45:50 crc kubenswrapper[4900]: I1202 13:45:50.939468 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be6f283650b3bee90d14cafb062112a4ccf88c1559e354c5cc023493f63a0347" Dec 02 13:45:50 crc kubenswrapper[4900]: I1202 13:45:50.939485 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:45:51 crc kubenswrapper[4900]: I1202 13:45:51.929169 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 13:45:51 crc kubenswrapper[4900]: E1202 13:45:51.929685 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fed8e9ef-0d39-4fd7-9f86-2037114af8d3" containerName="pruner" Dec 02 13:45:51 crc kubenswrapper[4900]: I1202 13:45:51.929700 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed8e9ef-0d39-4fd7-9f86-2037114af8d3" containerName="pruner" Dec 02 13:45:51 crc kubenswrapper[4900]: I1202 13:45:51.929816 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="fed8e9ef-0d39-4fd7-9f86-2037114af8d3" containerName="pruner" Dec 02 13:45:51 crc kubenswrapper[4900]: I1202 13:45:51.930204 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:45:51 crc kubenswrapper[4900]: I1202 13:45:51.933019 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 13:45:51 crc kubenswrapper[4900]: I1202 13:45:51.938254 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 13:45:51 crc kubenswrapper[4900]: I1202 13:45:51.945436 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 13:45:51 crc kubenswrapper[4900]: I1202 13:45:51.993891 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kube-api-access\") pod \"installer-9-crc\" (UID: \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:45:51 crc kubenswrapper[4900]: I1202 13:45:51.993993 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:45:51 crc kubenswrapper[4900]: I1202 13:45:51.994060 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-var-lock\") pod \"installer-9-crc\" (UID: \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:45:52 crc kubenswrapper[4900]: I1202 13:45:52.094528 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kube-api-access\") pod \"installer-9-crc\" (UID: \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:45:52 crc kubenswrapper[4900]: I1202 13:45:52.094569 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:45:52 crc kubenswrapper[4900]: I1202 13:45:52.094596 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-var-lock\") pod \"installer-9-crc\" (UID: \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:45:52 crc kubenswrapper[4900]: I1202 13:45:52.094680 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:45:52 crc kubenswrapper[4900]: I1202 13:45:52.094747 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-var-lock\") pod \"installer-9-crc\" (UID: 
\"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:45:52 crc kubenswrapper[4900]: I1202 13:45:52.114539 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kube-api-access\") pod \"installer-9-crc\" (UID: \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:45:52 crc kubenswrapper[4900]: I1202 13:45:52.264413 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:45:52 crc kubenswrapper[4900]: I1202 13:45:52.460542 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 13:45:52 crc kubenswrapper[4900]: I1202 13:45:52.955567 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4","Type":"ContainerStarted","Data":"d3c6d4e1b3e95ee0cad055b6e1ea02f7867089f8fa925bfc3c3df152c55d966d"} Dec 02 13:45:53 crc kubenswrapper[4900]: I1202 13:45:53.963480 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4","Type":"ContainerStarted","Data":"74eb0bf2ccd3b62f04becdff5d16af10723d7b6d8ee78ce29a3b0d7bf86e669c"} Dec 02 13:45:53 crc kubenswrapper[4900]: I1202 13:45:53.980014 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.979980012 podStartE2EDuration="2.979980012s" podCreationTimestamp="2025-12-02 13:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:53.977967427 +0000 UTC m=+199.393781268" watchObservedRunningTime="2025-12-02 13:45:53.979980012 +0000 UTC m=+199.395793873" Dec 02 13:45:53 crc kubenswrapper[4900]: I1202 13:45:53.993034 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:53 crc kubenswrapper[4900]: I1202 13:45:53.993230 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:54 crc kubenswrapper[4900]: I1202 13:45:54.047395 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:54 crc kubenswrapper[4900]: I1202 13:45:54.125870 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:54 crc kubenswrapper[4900]: I1202 13:45:54.125938 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:54 crc kubenswrapper[4900]: I1202 13:45:54.173966 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 13:45:55 crc kubenswrapper[4900]: I1202 13:45:55.015466 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:45:55 crc kubenswrapper[4900]: I1202 13:45:55.019264 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mkxxn" Dec 02 
Dec 02 13:45:55 crc kubenswrapper[4900]: I1202 13:45:55.282567 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkxxn"]
Dec 02 13:45:56 crc kubenswrapper[4900]: I1202 13:45:56.980865 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mkxxn" podUID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" containerName="registry-server" containerID="cri-o://2f85f1070e57c6616c808b9a66212b46ff8ddbd94b6c7802cc049bf6585af38d" gracePeriod=2
Dec 02 13:45:57 crc kubenswrapper[4900]: I1202 13:45:57.511127 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vs7wd"
Dec 02 13:45:57 crc kubenswrapper[4900]: I1202 13:45:57.569329 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vs7wd"
Dec 02 13:45:57 crc kubenswrapper[4900]: I1202 13:45:57.990171 4900 generic.go:334] "Generic (PLEG): container finished" podID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" containerID="2f85f1070e57c6616c808b9a66212b46ff8ddbd94b6c7802cc049bf6585af38d" exitCode=0
Dec 02 13:45:57 crc kubenswrapper[4900]: I1202 13:45:57.990238 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkxxn" event={"ID":"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd","Type":"ContainerDied","Data":"2f85f1070e57c6616c808b9a66212b46ff8ddbd94b6c7802cc049bf6585af38d"}
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.595074 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkxxn"
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.683401 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vs7wd"]
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.683735 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vs7wd" podUID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerName="registry-server" containerID="cri-o://ded8e4e69d9bfaef5b21920f38cba22f54e934ee632c6b309b39e2751eda373c" gracePeriod=2
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.720362 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-utilities\") pod \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") "
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.720453 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d8jh\" (UniqueName: \"kubernetes.io/projected/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-kube-api-access-2d8jh\") pod \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") "
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.720512 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-catalog-content\") pod \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\" (UID: \"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd\") "
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.721273 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-utilities" (OuterVolumeSpecName: "utilities") pod "e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" (UID: "e5d2dac4-d5f2-4d22-82f6-9946054cd0fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.727590 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-kube-api-access-2d8jh" (OuterVolumeSpecName: "kube-api-access-2d8jh") pod "e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" (UID: "e5d2dac4-d5f2-4d22-82f6-9946054cd0fd"). InnerVolumeSpecName "kube-api-access-2d8jh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.775451 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" (UID: "e5d2dac4-d5f2-4d22-82f6-9946054cd0fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.821662 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.821702 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d8jh\" (UniqueName: \"kubernetes.io/projected/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-kube-api-access-2d8jh\") on node \"crc\" DevicePath \"\""
Dec 02 13:45:59 crc kubenswrapper[4900]: I1202 13:45:59.821714 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 13:46:00 crc kubenswrapper[4900]: I1202 13:46:00.005914 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkxxn" event={"ID":"e5d2dac4-d5f2-4d22-82f6-9946054cd0fd","Type":"ContainerDied","Data":"e58a49d60817437d912cf5e9fc46f70f2792d99be2abb0d02a2ec93deda8d5a9"}
Dec 02 13:46:00 crc kubenswrapper[4900]: I1202 13:46:00.005977 4900 scope.go:117] "RemoveContainer" containerID="2f85f1070e57c6616c808b9a66212b46ff8ddbd94b6c7802cc049bf6585af38d"
Dec 02 13:46:00 crc kubenswrapper[4900]: I1202 13:46:00.006002 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkxxn"
Dec 02 13:46:00 crc kubenswrapper[4900]: I1202 13:46:00.022703 4900 scope.go:117] "RemoveContainer" containerID="412114cda5bc5f181b0268dcad24703556ff08baf0082aca8218126632c6aa73"
Dec 02 13:46:00 crc kubenswrapper[4900]: I1202 13:46:00.037034 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkxxn"]
Dec 02 13:46:00 crc kubenswrapper[4900]: I1202 13:46:00.040088 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mkxxn"]
Dec 02 13:46:00 crc kubenswrapper[4900]: I1202 13:46:00.058035 4900 scope.go:117] "RemoveContainer" containerID="506ab442b1ae273d4a05aa637d76b8238f124113532966eea1dee11ea8f4e72d"
Dec 02 13:46:00 crc kubenswrapper[4900]: I1202 13:46:00.919498 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" path="/var/lib/kubelet/pods/e5d2dac4-d5f2-4d22-82f6-9946054cd0fd/volumes"
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.016178 4900 generic.go:334] "Generic (PLEG): container finished" podID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerID="ded8e4e69d9bfaef5b21920f38cba22f54e934ee632c6b309b39e2751eda373c" exitCode=0
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.016225 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs7wd" event={"ID":"3353d0f3-48b8-4b6a-bb09-d19a523098b0","Type":"ContainerDied","Data":"ded8e4e69d9bfaef5b21920f38cba22f54e934ee632c6b309b39e2751eda373c"}
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.328758 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vs7wd"
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.441951 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrq79\" (UniqueName: \"kubernetes.io/projected/3353d0f3-48b8-4b6a-bb09-d19a523098b0-kube-api-access-jrq79\") pod \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") "
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.442451 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-catalog-content\") pod \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") "
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.442633 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-utilities\") pod \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\" (UID: \"3353d0f3-48b8-4b6a-bb09-d19a523098b0\") "
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.443635 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-utilities" (OuterVolumeSpecName: "utilities") pod "3353d0f3-48b8-4b6a-bb09-d19a523098b0" (UID: "3353d0f3-48b8-4b6a-bb09-d19a523098b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.485702 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3353d0f3-48b8-4b6a-bb09-d19a523098b0-kube-api-access-jrq79" (OuterVolumeSpecName: "kube-api-access-jrq79") pod "3353d0f3-48b8-4b6a-bb09-d19a523098b0" (UID: "3353d0f3-48b8-4b6a-bb09-d19a523098b0"). InnerVolumeSpecName "kube-api-access-jrq79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.543877 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.543914 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrq79\" (UniqueName: \"kubernetes.io/projected/3353d0f3-48b8-4b6a-bb09-d19a523098b0-kube-api-access-jrq79\") on node \"crc\" DevicePath \"\""
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.582029 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3353d0f3-48b8-4b6a-bb09-d19a523098b0" (UID: "3353d0f3-48b8-4b6a-bb09-d19a523098b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 13:46:01 crc kubenswrapper[4900]: I1202 13:46:01.645090 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3353d0f3-48b8-4b6a-bb09-d19a523098b0-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 13:46:02 crc kubenswrapper[4900]: I1202 13:46:02.024947 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7jfg" event={"ID":"798784ef-f2ba-494b-a896-443aef626a69","Type":"ContainerStarted","Data":"d0673fffdf07eeef6e655e230c4fb5699c07e9ef4c16fa242bd2368463ffe31e"}
Need to start a new one" pod="openshift-marketplace/redhat-operators-vs7wd" Dec 02 13:46:02 crc kubenswrapper[4900]: I1202 13:46:02.028187 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs7wd" event={"ID":"3353d0f3-48b8-4b6a-bb09-d19a523098b0","Type":"ContainerDied","Data":"07df76bb23142518c483577a571664ef20cd8de0b0f5382f2e9aa33b3201e34c"} Dec 02 13:46:02 crc kubenswrapper[4900]: I1202 13:46:02.028287 4900 scope.go:117] "RemoveContainer" containerID="ded8e4e69d9bfaef5b21920f38cba22f54e934ee632c6b309b39e2751eda373c" Dec 02 13:46:02 crc kubenswrapper[4900]: I1202 13:46:02.033297 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sjzb" event={"ID":"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37","Type":"ContainerStarted","Data":"1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081"} Dec 02 13:46:02 crc kubenswrapper[4900]: I1202 13:46:02.057966 4900 scope.go:117] "RemoveContainer" containerID="c6d2e6693392fe607ab060038f4d2614a77a7fab1f08002973f29111372c3764" Dec 02 13:46:02 crc kubenswrapper[4900]: I1202 13:46:02.086383 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vs7wd"] Dec 02 13:46:02 crc kubenswrapper[4900]: I1202 13:46:02.089171 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vs7wd"] Dec 02 13:46:02 crc kubenswrapper[4900]: I1202 13:46:02.089510 4900 scope.go:117] "RemoveContainer" containerID="17b7c4ee107548049890f1eb0a6eeb760fc87c76b827265c2a03c32dfa3579fe" Dec 02 13:46:02 crc kubenswrapper[4900]: I1202 13:46:02.917133 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" path="/var/lib/kubelet/pods/3353d0f3-48b8-4b6a-bb09-d19a523098b0/volumes" Dec 02 13:46:03 crc kubenswrapper[4900]: I1202 13:46:03.042384 4900 generic.go:334] "Generic (PLEG): container finished" podID="d5f5c196-c4c3-467a-882c-e8a39aabbede" containerID="d054f6cd78a34aa30d33058a6a70dd72a370275af9b6ea6409e39d0ee56d7426" exitCode=0 Dec 02 13:46:03 crc kubenswrapper[4900]: I1202 13:46:03.042458 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2xg" event={"ID":"d5f5c196-c4c3-467a-882c-e8a39aabbede","Type":"ContainerDied","Data":"d054f6cd78a34aa30d33058a6a70dd72a370275af9b6ea6409e39d0ee56d7426"} Dec 02 13:46:03 crc kubenswrapper[4900]: I1202 13:46:03.046521 4900 generic.go:334] "Generic (PLEG): container finished" podID="798784ef-f2ba-494b-a896-443aef626a69" containerID="d0673fffdf07eeef6e655e230c4fb5699c07e9ef4c16fa242bd2368463ffe31e" exitCode=0 Dec 02 13:46:03 crc kubenswrapper[4900]: I1202 13:46:03.046579 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7jfg" event={"ID":"798784ef-f2ba-494b-a896-443aef626a69","Type":"ContainerDied","Data":"d0673fffdf07eeef6e655e230c4fb5699c07e9ef4c16fa242bd2368463ffe31e"} Dec 02 13:46:03 crc kubenswrapper[4900]: I1202 13:46:03.057869 4900 generic.go:334] "Generic (PLEG): container finished" podID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" containerID="1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081" exitCode=0 Dec 02 13:46:03 crc kubenswrapper[4900]: I1202 13:46:03.057928 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sjzb" 
event={"ID":"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37","Type":"ContainerDied","Data":"1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081"} Dec 02 13:46:03 crc kubenswrapper[4900]: I1202 13:46:03.062171 4900 generic.go:334] "Generic (PLEG): container finished" podID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerID="d69f1b935295e89fcde9d73508b4b101654c4c77d7f350fa96e2493f90afb800" exitCode=0 Dec 02 13:46:03 crc kubenswrapper[4900]: I1202 13:46:03.062236 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zkbx" event={"ID":"3dc4aaac-9b9e-42e6-b943-25236645d1b2","Type":"ContainerDied","Data":"d69f1b935295e89fcde9d73508b4b101654c4c77d7f350fa96e2493f90afb800"} Dec 02 13:46:04 crc kubenswrapper[4900]: I1202 13:46:04.070854 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvppg" event={"ID":"60614c3a-d991-4156-9d83-55ab06706291","Type":"ContainerStarted","Data":"572051a3030a88f10b1d1c2fbcf019be6ef5cfd055c28db1a42560e295e6964e"} Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.077558 4900 generic.go:334] "Generic (PLEG): container finished" podID="60614c3a-d991-4156-9d83-55ab06706291" containerID="572051a3030a88f10b1d1c2fbcf019be6ef5cfd055c28db1a42560e295e6964e" exitCode=0 Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.077690 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvppg" event={"ID":"60614c3a-d991-4156-9d83-55ab06706291","Type":"ContainerDied","Data":"572051a3030a88f10b1d1c2fbcf019be6ef5cfd055c28db1a42560e295e6964e"} Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.082704 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sjzb" event={"ID":"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37","Type":"ContainerStarted","Data":"d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a"} Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.084109 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zkbx" event={"ID":"3dc4aaac-9b9e-42e6-b943-25236645d1b2","Type":"ContainerStarted","Data":"4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6"} Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.085960 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2xg" event={"ID":"d5f5c196-c4c3-467a-882c-e8a39aabbede","Type":"ContainerStarted","Data":"5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30"} Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.087562 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7jfg" event={"ID":"798784ef-f2ba-494b-a896-443aef626a69","Type":"ContainerStarted","Data":"8501e3eda5c75dadd634b726e9541092ecfcd20e83b35deb5f83a39648f3ba5f"} Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.116389 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tw2xg" podStartSLOduration=3.303544743 podStartE2EDuration="1m0.116362527s" podCreationTimestamp="2025-12-02 13:45:05 +0000 UTC" firstStartedPulling="2025-12-02 13:45:07.300859521 +0000 UTC m=+152.716673372" lastFinishedPulling="2025-12-02 13:46:04.113677305 +0000 UTC m=+209.529491156" observedRunningTime="2025-12-02 13:46:05.115403031 +0000 UTC m=+210.531216882" watchObservedRunningTime="2025-12-02 13:46:05.116362527 +0000 UTC m=+210.532176378" Dec 02 
13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.138942 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d7jfg" podStartSLOduration=4.463407904 podStartE2EDuration="1m2.13891492s" podCreationTimestamp="2025-12-02 13:45:03 +0000 UTC" firstStartedPulling="2025-12-02 13:45:06.282041597 +0000 UTC m=+151.697855448" lastFinishedPulling="2025-12-02 13:46:03.957548623 +0000 UTC m=+209.373362464" observedRunningTime="2025-12-02 13:46:05.135226688 +0000 UTC m=+210.551040539" watchObservedRunningTime="2025-12-02 13:46:05.13891492 +0000 UTC m=+210.554728771" Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.155461 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4zkbx" podStartSLOduration=3.330188502 podStartE2EDuration="1m0.155433386s" podCreationTimestamp="2025-12-02 13:45:05 +0000 UTC" firstStartedPulling="2025-12-02 13:45:07.312453484 +0000 UTC m=+152.728267335" lastFinishedPulling="2025-12-02 13:46:04.137698368 +0000 UTC m=+209.553512219" observedRunningTime="2025-12-02 13:46:05.151889908 +0000 UTC m=+210.567703759" watchObservedRunningTime="2025-12-02 13:46:05.155433386 +0000 UTC m=+210.571247237" Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.175502 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2sjzb" podStartSLOduration=4.555303385 podStartE2EDuration="1m2.17548183s" podCreationTimestamp="2025-12-02 13:45:03 +0000 UTC" firstStartedPulling="2025-12-02 13:45:06.29073875 +0000 UTC m=+151.706552601" lastFinishedPulling="2025-12-02 13:46:03.910917175 +0000 UTC m=+209.326731046" observedRunningTime="2025-12-02 13:46:05.172525768 +0000 UTC m=+210.588339619" watchObservedRunningTime="2025-12-02 13:46:05.17548183 +0000 UTC m=+210.591295681" Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.737615 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4zkbx" Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.738821 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4zkbx" Dec 02 13:46:05 crc kubenswrapper[4900]: I1202 13:46:05.786816 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4zkbx" Dec 02 13:46:06 crc kubenswrapper[4900]: I1202 13:46:06.097394 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tw2xg" Dec 02 13:46:06 crc kubenswrapper[4900]: I1202 13:46:06.097453 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tw2xg" Dec 02 13:46:06 crc kubenswrapper[4900]: I1202 13:46:06.156234 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tw2xg" Dec 02 13:46:07 crc kubenswrapper[4900]: I1202 13:46:07.110696 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvppg" event={"ID":"60614c3a-d991-4156-9d83-55ab06706291","Type":"ContainerStarted","Data":"afd2d17e10a45c0e607700ffa0542ee6834160795527ca24e1f29d6856731ce3"} Dec 02 13:46:07 crc kubenswrapper[4900]: I1202 13:46:07.142722 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvppg" 
podStartSLOduration=3.205773591 podStartE2EDuration="1m1.142697263s" podCreationTimestamp="2025-12-02 13:45:06 +0000 UTC" firstStartedPulling="2025-12-02 13:45:08.334801055 +0000 UTC m=+153.750614906" lastFinishedPulling="2025-12-02 13:46:06.271724697 +0000 UTC m=+211.687538578" observedRunningTime="2025-12-02 13:46:07.137245822 +0000 UTC m=+212.553059693" watchObservedRunningTime="2025-12-02 13:46:07.142697263 +0000 UTC m=+212.558511114" Dec 02 13:46:13 crc kubenswrapper[4900]: I1202 13:46:13.940234 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:46:13 crc kubenswrapper[4900]: I1202 13:46:13.940983 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:46:14 crc kubenswrapper[4900]: I1202 13:46:14.014766 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:46:14 crc kubenswrapper[4900]: I1202 13:46:14.233567 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:46:14 crc kubenswrapper[4900]: I1202 13:46:14.356173 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:46:14 crc kubenswrapper[4900]: I1202 13:46:14.356248 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:46:14 crc kubenswrapper[4900]: I1202 13:46:14.417197 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.119006 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.119337 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.119468 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.120583 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.120833 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3" gracePeriod=600 
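The pod_startup_latency_tracker records above encode one relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since pull time does not count against the startup SLO. A minimal re-derivation from the redhat-marketplace-tw2xg record, with timestamps copied from the log (the pull-time exclusion is the tracker's intent, inferred here, not something the log itself states):

    from datetime import datetime

    def ts(s):
        # Trim the log's nanosecond timestamps to microseconds for strptime.
        return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

    created    = datetime(2025, 12, 2, 13, 45, 5)            # podCreationTimestamp
    first_pull = ts("2025-12-02 13:45:07.300859521")          # firstStartedPulling
    last_pull  = ts("2025-12-02 13:46:04.113677305")          # lastFinishedPulling
    running    = ts("2025-12-02 13:46:05.116362527")          # watchObservedRunningTime

    e2e = (running - created).total_seconds()                 # -> 60.116362 ("1m0.116362527s")
    slo = e2e - (last_pull - first_pull).total_seconds()      # -> 3.303544  (3.303544743)
    print(f"podStartE2EDuration={e2e:.6f}s podStartSLOduration={slo:.6f}s")

All four marketplace pods show the same shape: roughly a minute end to end, of which ~57s was spent pulling catalog images.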
Dec 02 13:46:13 crc kubenswrapper[4900]: I1202 13:46:13.940234 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2sjzb"
Dec 02 13:46:13 crc kubenswrapper[4900]: I1202 13:46:13.940983 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2sjzb"
Dec 02 13:46:14 crc kubenswrapper[4900]: I1202 13:46:14.014766 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2sjzb"
Dec 02 13:46:14 crc kubenswrapper[4900]: I1202 13:46:14.233567 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2sjzb"
Dec 02 13:46:14 crc kubenswrapper[4900]: I1202 13:46:14.356173 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d7jfg"
Dec 02 13:46:14 crc kubenswrapper[4900]: I1202 13:46:14.356248 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d7jfg"
Dec 02 13:46:14 crc kubenswrapper[4900]: I1202 13:46:14.417197 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d7jfg"
Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.119006 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.119337 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.119468 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq"
Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.120583 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.120833 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3" gracePeriod=600
Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.245473 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d7jfg"
Dec 02 13:46:15 crc kubenswrapper[4900]: I1202 13:46:15.803550 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4zkbx"
Dec 02 13:46:16 crc kubenswrapper[4900]: I1202 13:46:16.173415 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tw2xg"
Dec 02 13:46:16 crc kubenswrapper[4900]: I1202 13:46:16.668336 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d7jfg"]
Dec 02 13:46:17 crc kubenswrapper[4900]: I1202 13:46:17.075460 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:46:17 crc kubenswrapper[4900]: I1202 13:46:17.075534 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:46:17 crc kubenswrapper[4900]: I1202 13:46:17.170917 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:46:17 crc kubenswrapper[4900]: I1202 13:46:17.190330 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d7jfg" podUID="798784ef-f2ba-494b-a896-443aef626a69" containerName="registry-server" containerID="cri-o://8501e3eda5c75dadd634b726e9541092ecfcd20e83b35deb5f83a39648f3ba5f" gracePeriod=2
Dec 02 13:46:17 crc kubenswrapper[4900]: I1202 13:46:17.243532 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvppg"
Dec 02 13:46:17 crc kubenswrapper[4900]: I1202 13:46:17.772023 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dbc7k"]
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.056698 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2xg"]
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.057406 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tw2xg" podUID="d5f5c196-c4c3-467a-882c-e8a39aabbede" containerName="registry-server" containerID="cri-o://5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30" gracePeriod=2
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.198393 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3" exitCode=0
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.199118 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3"}
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.201120 4900 generic.go:334] "Generic (PLEG): container finished" podID="798784ef-f2ba-494b-a896-443aef626a69" containerID="8501e3eda5c75dadd634b726e9541092ecfcd20e83b35deb5f83a39648f3ba5f" exitCode=0
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.201172 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7jfg" event={"ID":"798784ef-f2ba-494b-a896-443aef626a69","Type":"ContainerDied","Data":"8501e3eda5c75dadd634b726e9541092ecfcd20e83b35deb5f83a39648f3ba5f"}
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.765750 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d7jfg"
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.856878 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-catalog-content\") pod \"798784ef-f2ba-494b-a896-443aef626a69\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") "
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.856978 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-984sg\" (UniqueName: \"kubernetes.io/projected/798784ef-f2ba-494b-a896-443aef626a69-kube-api-access-984sg\") pod \"798784ef-f2ba-494b-a896-443aef626a69\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") "
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.857017 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-utilities\") pod \"798784ef-f2ba-494b-a896-443aef626a69\" (UID: \"798784ef-f2ba-494b-a896-443aef626a69\") "
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.858194 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-utilities" (OuterVolumeSpecName: "utilities") pod "798784ef-f2ba-494b-a896-443aef626a69" (UID: "798784ef-f2ba-494b-a896-443aef626a69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.886432 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798784ef-f2ba-494b-a896-443aef626a69-kube-api-access-984sg" (OuterVolumeSpecName: "kube-api-access-984sg") pod "798784ef-f2ba-494b-a896-443aef626a69" (UID: "798784ef-f2ba-494b-a896-443aef626a69"). InnerVolumeSpecName "kube-api-access-984sg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.909268 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "798784ef-f2ba-494b-a896-443aef626a69" (UID: "798784ef-f2ba-494b-a896-443aef626a69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.959438 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.959484 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798784ef-f2ba-494b-a896-443aef626a69-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:18 crc kubenswrapper[4900]: I1202 13:46:18.959501 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-984sg\" (UniqueName: \"kubernetes.io/projected/798784ef-f2ba-494b-a896-443aef626a69-kube-api-access-984sg\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.191209 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2xg" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.217700 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7jfg" event={"ID":"798784ef-f2ba-494b-a896-443aef626a69","Type":"ContainerDied","Data":"59772b68801801c779a6cfd362899e6447bc9aad26df4ab57e3694b65a57588a"} Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.217770 4900 scope.go:117] "RemoveContainer" containerID="8501e3eda5c75dadd634b726e9541092ecfcd20e83b35deb5f83a39648f3ba5f" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.219265 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d7jfg" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.227348 4900 generic.go:334] "Generic (PLEG): container finished" podID="d5f5c196-c4c3-467a-882c-e8a39aabbede" containerID="5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30" exitCode=0 Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.227446 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2xg" event={"ID":"d5f5c196-c4c3-467a-882c-e8a39aabbede","Type":"ContainerDied","Data":"5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30"} Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.227486 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2xg" event={"ID":"d5f5c196-c4c3-467a-882c-e8a39aabbede","Type":"ContainerDied","Data":"defaa42d3b4660938dc62c1f9eb165a5712cdfe61aa0a33fe8f48b1140bf527b"} Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.227593 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2xg" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.231540 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"99208c08de62263a05d161e78ca2b735d405123b0b78d98c975543243600a6ba"} Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.245572 4900 scope.go:117] "RemoveContainer" containerID="d0673fffdf07eeef6e655e230c4fb5699c07e9ef4c16fa242bd2368463ffe31e" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.245750 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d7jfg"] Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.252818 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d7jfg"] Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.262405 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-utilities\") pod \"d5f5c196-c4c3-467a-882c-e8a39aabbede\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.262455 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-catalog-content\") pod \"d5f5c196-c4c3-467a-882c-e8a39aabbede\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.262542 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksmjj\" (UniqueName: \"kubernetes.io/projected/d5f5c196-c4c3-467a-882c-e8a39aabbede-kube-api-access-ksmjj\") pod \"d5f5c196-c4c3-467a-882c-e8a39aabbede\" (UID: \"d5f5c196-c4c3-467a-882c-e8a39aabbede\") " Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.265902 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-utilities" (OuterVolumeSpecName: "utilities") pod "d5f5c196-c4c3-467a-882c-e8a39aabbede" (UID: "d5f5c196-c4c3-467a-882c-e8a39aabbede"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.269401 4900 scope.go:117] "RemoveContainer" containerID="4fd5c1af7da3393801cf9a92e939a4c39213f7a0a046e2c0b8af7f60bcefed58" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.269555 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f5c196-c4c3-467a-882c-e8a39aabbede-kube-api-access-ksmjj" (OuterVolumeSpecName: "kube-api-access-ksmjj") pod "d5f5c196-c4c3-467a-882c-e8a39aabbede" (UID: "d5f5c196-c4c3-467a-882c-e8a39aabbede"). InnerVolumeSpecName "kube-api-access-ksmjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.287692 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5f5c196-c4c3-467a-882c-e8a39aabbede" (UID: "d5f5c196-c4c3-467a-882c-e8a39aabbede"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.293285 4900 scope.go:117] "RemoveContainer" containerID="5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.316744 4900 scope.go:117] "RemoveContainer" containerID="d054f6cd78a34aa30d33058a6a70dd72a370275af9b6ea6409e39d0ee56d7426" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.333157 4900 scope.go:117] "RemoveContainer" containerID="ca492db0037642ca8876da313a96d25ab6ae2f698c872aab0879a446f157a62d" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.357356 4900 scope.go:117] "RemoveContainer" containerID="5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30" Dec 02 13:46:19 crc kubenswrapper[4900]: E1202 13:46:19.357720 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30\": container with ID starting with 5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30 not found: ID does not exist" containerID="5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.357757 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30"} err="failed to get container status \"5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30\": rpc error: code = NotFound desc = could not find container \"5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30\": container with ID starting with 5a7f385ca1cc36808209b8cc3bb32d1f2f1a08b7ea0a8c47e7aeaa0d2b170c30 not found: ID does not exist" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.357781 4900 scope.go:117] "RemoveContainer" containerID="d054f6cd78a34aa30d33058a6a70dd72a370275af9b6ea6409e39d0ee56d7426" Dec 02 13:46:19 crc kubenswrapper[4900]: E1202 13:46:19.358056 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d054f6cd78a34aa30d33058a6a70dd72a370275af9b6ea6409e39d0ee56d7426\": container with ID starting with d054f6cd78a34aa30d33058a6a70dd72a370275af9b6ea6409e39d0ee56d7426 not found: ID does not exist" containerID="d054f6cd78a34aa30d33058a6a70dd72a370275af9b6ea6409e39d0ee56d7426" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.358084 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d054f6cd78a34aa30d33058a6a70dd72a370275af9b6ea6409e39d0ee56d7426"} err="failed to get container status \"d054f6cd78a34aa30d33058a6a70dd72a370275af9b6ea6409e39d0ee56d7426\": rpc error: code = NotFound desc = could not find container \"d054f6cd78a34aa30d33058a6a70dd72a370275af9b6ea6409e39d0ee56d7426\": container with ID starting with d054f6cd78a34aa30d33058a6a70dd72a370275af9b6ea6409e39d0ee56d7426 not found: ID does not exist" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.358106 4900 scope.go:117] "RemoveContainer" containerID="ca492db0037642ca8876da313a96d25ab6ae2f698c872aab0879a446f157a62d" Dec 02 13:46:19 crc kubenswrapper[4900]: E1202 13:46:19.358316 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca492db0037642ca8876da313a96d25ab6ae2f698c872aab0879a446f157a62d\": container with ID starting with 
ca492db0037642ca8876da313a96d25ab6ae2f698c872aab0879a446f157a62d not found: ID does not exist" containerID="ca492db0037642ca8876da313a96d25ab6ae2f698c872aab0879a446f157a62d" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.358333 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca492db0037642ca8876da313a96d25ab6ae2f698c872aab0879a446f157a62d"} err="failed to get container status \"ca492db0037642ca8876da313a96d25ab6ae2f698c872aab0879a446f157a62d\": rpc error: code = NotFound desc = could not find container \"ca492db0037642ca8876da313a96d25ab6ae2f698c872aab0879a446f157a62d\": container with ID starting with ca492db0037642ca8876da313a96d25ab6ae2f698c872aab0879a446f157a62d not found: ID does not exist" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.363989 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.364015 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f5c196-c4c3-467a-882c-e8a39aabbede-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.364030 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksmjj\" (UniqueName: \"kubernetes.io/projected/d5f5c196-c4c3-467a-882c-e8a39aabbede-kube-api-access-ksmjj\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.558432 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2xg"] Dec 02 13:46:19 crc kubenswrapper[4900]: I1202 13:46:19.563990 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2xg"] Dec 02 13:46:20 crc kubenswrapper[4900]: I1202 13:46:20.935739 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798784ef-f2ba-494b-a896-443aef626a69" path="/var/lib/kubelet/pods/798784ef-f2ba-494b-a896-443aef626a69/volumes" Dec 02 13:46:20 crc kubenswrapper[4900]: I1202 13:46:20.937614 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f5c196-c4c3-467a-882c-e8a39aabbede" path="/var/lib/kubelet/pods/d5f5c196-c4c3-467a-882c-e8a39aabbede/volumes" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.255973 4900 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.256922 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerName="extract-content" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.256949 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerName="extract-content" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.256973 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.256985 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.257004 4900 cpu_manager.go:410] "RemoveStaleState: removing container" 
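The three "ContainerStatus from runtime service failed ... NotFound" errors above are benign: RemoveContainer had already succeeded, so the follow-up status lookup finds nothing and the kubelet simply moves on. Deletion only has to converge on "absent". The pattern, sketched with a hypothetical runtime client standing in for the CRI RemoveContainer RPC against CRI-O:

    class NotFoundError(Exception):
        """Stands in for gRPC status code NotFound from the runtime."""

    def remove_container(runtime, container_id):
        # Idempotent delete: "already gone" is treated as success, which is
        # why the NotFound errors in the log do not block pod cleanup.
        try:
            runtime.remove(container_id)   # hypothetical client method
        except NotFoundError:
            pass  # desired state (container absent) already holds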
podUID="d5f5c196-c4c3-467a-882c-e8a39aabbede" containerName="extract-content" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257017 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f5c196-c4c3-467a-882c-e8a39aabbede" containerName="extract-content" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.257034 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798784ef-f2ba-494b-a896-443aef626a69" containerName="extract-utilities" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257046 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="798784ef-f2ba-494b-a896-443aef626a69" containerName="extract-utilities" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.257069 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" containerName="extract-content" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257082 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" containerName="extract-content" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.257097 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257109 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.257130 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798784ef-f2ba-494b-a896-443aef626a69" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257142 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="798784ef-f2ba-494b-a896-443aef626a69" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.257160 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" containerName="extract-utilities" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257172 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" containerName="extract-utilities" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.257190 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f5c196-c4c3-467a-882c-e8a39aabbede" containerName="extract-utilities" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257205 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f5c196-c4c3-467a-882c-e8a39aabbede" containerName="extract-utilities" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.257226 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerName="extract-utilities" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257238 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerName="extract-utilities" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.257261 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f5c196-c4c3-467a-882c-e8a39aabbede" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257273 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f5c196-c4c3-467a-882c-e8a39aabbede" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.257295 4900 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="798784ef-f2ba-494b-a896-443aef626a69" containerName="extract-content" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257307 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="798784ef-f2ba-494b-a896-443aef626a69" containerName="extract-content" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257482 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d2dac4-d5f2-4d22-82f6-9946054cd0fd" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257505 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3353d0f3-48b8-4b6a-bb09-d19a523098b0" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257526 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="798784ef-f2ba-494b-a896-443aef626a69" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.257550 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f5c196-c4c3-467a-882c-e8a39aabbede" containerName="registry-server" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.258055 4900 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.258356 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.258521 4900 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.258734 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae" gracePeriod=15 Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.258778 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a" gracePeriod=15 Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.258804 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0" gracePeriod=15 Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.258750 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297" gracePeriod=15 Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.258878 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819" gracePeriod=15 Dec 
Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.261720 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.261772 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.261793 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.261807 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.261826 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.261838 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.261854 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.261867 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.261898 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.261910 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.261928 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.261952 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.262136 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.262161 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.262176 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.262193 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.262211 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.262515 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.262534 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.262728 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.264292 4900 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.264831 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.264906 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.264958 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.265121 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.265164 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.314236 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.366336 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
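The "Pod was deleted and then recreated, skipping status update" line deserves a gloss: kube-apiserver-crc is a static pod, so its identity is derived from the manifest the kubelet reads from disk. Replacing the manifest yields a new UID (f4b27818... becomes 71bb4a3a...), and status updates addressed to the old UID are dropped rather than applied to the new incarnation. Purely as an illustration of "identity follows the manifest contents" (not the kubelet's actual hashing scheme):

    import hashlib

    def static_pod_uid(manifest_bytes: bytes) -> str:
        # Illustrative only: a content-derived identity, so editing the
        # on-disk manifest produces a different UID for the replacement pod.
        return hashlib.sha256(manifest_bytes).hexdigest()[:32]

    old = static_pod_uid(b"apiVersion: v1\nkind: Pod\n# revision 8\n")
    new = static_pod_uid(b"apiVersion: v1\nkind: Pod\n# revision 9\n")
    assert old != new  # status keyed to `old` is skipped, as in the log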
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.366394 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.366460 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.366497 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.366524 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.366633 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.367045 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.367087 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.367117 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.367148 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.468562 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.468614 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.468666 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.570263 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.570339 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.570425 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.570476 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.570441 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.570377 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: I1202 13:46:31.604804 4900 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:46:31 crc kubenswrapper[4900]: E1202 13:46:31.646355 4900 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d6a035b1adaca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 13:46:31.64503521 +0000 UTC m=+237.060849101,LastTimestamp:2025-12-02 13:46:31.64503521 +0000 UTC m=+237.060849101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.321760 4900 generic.go:334] "Generic (PLEG): container finished" podID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" containerID="74eb0bf2ccd3b62f04becdff5d16af10723d7b6d8ee78ce29a3b0d7bf86e669c" exitCode=0 Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.321871 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4","Type":"ContainerDied","Data":"74eb0bf2ccd3b62f04becdff5d16af10723d7b6d8ee78ce29a3b0d7bf86e669c"} Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.322981 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.323796 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.325614 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.327910 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.329329 4900 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a" exitCode=0 Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.329371 4900 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0" exitCode=0 Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.329386 4900 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297" exitCode=0 Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.329402 4900 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819" exitCode=2 Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.329445 4900 scope.go:117] "RemoveContainer" containerID="97683d24f53d842d5a94c192abaadc20e4c99daf1ff2ed479173e5c14aea2f33" Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.332411 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b9163bb656aae0e404e2f0e34a6c82b056577628ae9cc1240aa88ed1b1470aaa"} Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.332484 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d8e214092b12522db18e0f2afcd973e50ce556ca03e2369071abb595e63f94d4"} Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.333503 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:32 crc kubenswrapper[4900]: I1202 13:46:32.334282 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.345191 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.741223 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.742672 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.743675 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.746462 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.747832 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.748227 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.748442 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.748725 4900 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.910578 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.910691 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.910784 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.910826 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.910896 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.910921 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kube-api-access\") pod \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\" (UID: \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.910954 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.911035 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-var-lock\") pod \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\" (UID: \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.911120 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kubelet-dir\") pod \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\" (UID: \"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4\") " Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.911125 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-var-lock" (OuterVolumeSpecName: "var-lock") pod "9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" (UID: "9b64aa66-b5d3-4ffe-8686-f330c73f0ba4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.911243 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" (UID: "9b64aa66-b5d3-4ffe-8686-f330c73f0ba4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.911517 4900 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.911544 4900 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.911561 4900 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.911578 4900 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.911596 4900 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:33 crc kubenswrapper[4900]: I1202 13:46:33.916819 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" (UID: "9b64aa66-b5d3-4ffe-8686-f330c73f0ba4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.013354 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b64aa66-b5d3-4ffe-8686-f330c73f0ba4-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.375722 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9b64aa66-b5d3-4ffe-8686-f330c73f0ba4","Type":"ContainerDied","Data":"d3c6d4e1b3e95ee0cad055b6e1ea02f7867089f8fa925bfc3c3df152c55d966d"} Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.376193 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c6d4e1b3e95ee0cad055b6e1ea02f7867089f8fa925bfc3c3df152c55d966d" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.376304 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.393238 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.398226 4900 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae" exitCode=0 Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.398320 4900 scope.go:117] "RemoveContainer" containerID="8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.398461 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.399144 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.399819 4900 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.400260 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.439037 4900 scope.go:117] "RemoveContainer" containerID="ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.444201 4900 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.444858 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.445734 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.475957 
4900 scope.go:117] "RemoveContainer" containerID="5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.500206 4900 scope.go:117] "RemoveContainer" containerID="7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.527763 4900 scope.go:117] "RemoveContainer" containerID="80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.568070 4900 scope.go:117] "RemoveContainer" containerID="671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.602272 4900 scope.go:117] "RemoveContainer" containerID="8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a" Dec 02 13:46:34 crc kubenswrapper[4900]: E1202 13:46:34.603780 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\": container with ID starting with 8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a not found: ID does not exist" containerID="8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.603831 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a"} err="failed to get container status \"8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\": rpc error: code = NotFound desc = could not find container \"8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a\": container with ID starting with 8c1d905e98ee6f0e47186338452342c4d09f375941c58a9a9265d26c00a20c0a not found: ID does not exist" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.604041 4900 scope.go:117] "RemoveContainer" containerID="ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0" Dec 02 13:46:34 crc kubenswrapper[4900]: E1202 13:46:34.604533 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\": container with ID starting with ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0 not found: ID does not exist" containerID="ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.604636 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0"} err="failed to get container status \"ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\": rpc error: code = NotFound desc = could not find container \"ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0\": container with ID starting with ec8f7f699579ca6980cb28214b74d4ae6c5786dac4e248ee83e16dd48627acd0 not found: ID does not exist" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.604744 4900 scope.go:117] "RemoveContainer" containerID="5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297" Dec 02 13:46:34 crc kubenswrapper[4900]: E1202 13:46:34.605458 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\": container with ID starting with 5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297 not found: ID does not exist" containerID="5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.605531 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297"} err="failed to get container status \"5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\": rpc error: code = NotFound desc = could not find container \"5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297\": container with ID starting with 5fae585cee74a72c6b4edf4f4c8c329a9d269a2725563eb5eef2d7b739d0e297 not found: ID does not exist" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.605567 4900 scope.go:117] "RemoveContainer" containerID="7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819" Dec 02 13:46:34 crc kubenswrapper[4900]: E1202 13:46:34.606310 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\": container with ID starting with 7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819 not found: ID does not exist" containerID="7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.606340 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819"} err="failed to get container status \"7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\": rpc error: code = NotFound desc = could not find container \"7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819\": container with ID starting with 7feb00e4ce5d66c8fa314dc4ecb5d2e08e5db34316d960530abb412eda8fa819 not found: ID does not exist" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.606359 4900 scope.go:117] "RemoveContainer" containerID="80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae" Dec 02 13:46:34 crc kubenswrapper[4900]: E1202 13:46:34.606718 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\": container with ID starting with 80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae not found: ID does not exist" containerID="80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.606772 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae"} err="failed to get container status \"80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\": rpc error: code = NotFound desc = could not find container \"80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae\": container with ID starting with 80699cce79bf6bba860e42fa9513b715ae7b6b710fd8c131321a2bb2b1e5b8ae not found: ID does not exist" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.606807 4900 scope.go:117] "RemoveContainer" containerID="671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc" Dec 02 13:46:34 crc 
kubenswrapper[4900]: E1202 13:46:34.607386 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\": container with ID starting with 671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc not found: ID does not exist" containerID="671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.607458 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc"} err="failed to get container status \"671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\": rpc error: code = NotFound desc = could not find container \"671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc\": container with ID starting with 671585d3aa171610fed87d5662977a8e45d21f8eee57ee6b2e0cd44e3bb310bc not found: ID does not exist" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.913306 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.914330 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.914911 4900 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:34 crc kubenswrapper[4900]: I1202 13:46:34.926502 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 02 13:46:38 crc kubenswrapper[4900]: E1202 13:46:38.287058 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:46:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:46:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:46:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:46:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:38 crc kubenswrapper[4900]: E1202 13:46:38.288245 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:38 crc kubenswrapper[4900]: E1202 13:46:38.288855 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:38 crc kubenswrapper[4900]: E1202 13:46:38.289203 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:38 crc kubenswrapper[4900]: E1202 13:46:38.289561 4900 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:38 crc kubenswrapper[4900]: E1202 13:46:38.289605 4900 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:46:39 crc kubenswrapper[4900]: E1202 13:46:39.188819 4900 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d6a035b1adaca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 13:46:31.64503521 +0000 UTC m=+237.060849101,LastTimestamp:2025-12-02 13:46:31.64503521 +0000 UTC 
m=+237.060849101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 13:46:39 crc kubenswrapper[4900]: E1202 13:46:39.471761 4900 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:39 crc kubenswrapper[4900]: E1202 13:46:39.472612 4900 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:39 crc kubenswrapper[4900]: E1202 13:46:39.473321 4900 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:39 crc kubenswrapper[4900]: E1202 13:46:39.473829 4900 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:39 crc kubenswrapper[4900]: E1202 13:46:39.474339 4900 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:39 crc kubenswrapper[4900]: I1202 13:46:39.474399 4900 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 02 13:46:39 crc kubenswrapper[4900]: E1202 13:46:39.474977 4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="200ms" Dec 02 13:46:39 crc kubenswrapper[4900]: E1202 13:46:39.676388 4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms" Dec 02 13:46:40 crc kubenswrapper[4900]: E1202 13:46:40.077839 4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="800ms" Dec 02 13:46:40 crc kubenswrapper[4900]: E1202 13:46:40.878965 4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s" Dec 02 13:46:42 crc kubenswrapper[4900]: E1202 13:46:42.480310 4900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="3.2s" Dec 02 13:46:42 crc kubenswrapper[4900]: I1202 13:46:42.810623 4900 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" podUID="eb44d009-5920-4606-aba3-aaf7104b1a22" containerName="oauth-openshift" containerID="cri-o://29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9" gracePeriod=15 Dec 02 13:46:42 crc kubenswrapper[4900]: I1202 13:46:42.910096 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:42 crc kubenswrapper[4900]: I1202 13:46:42.911429 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:42 crc kubenswrapper[4900]: I1202 13:46:42.912197 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:42 crc kubenswrapper[4900]: E1202 13:46:42.956551 4900 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" volumeName="registry-storage" Dec 02 13:46:42 crc kubenswrapper[4900]: I1202 13:46:42.980449 4900 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49f275c1-19ff-4729-9cb5-736ec1525302" Dec 02 13:46:42 crc kubenswrapper[4900]: I1202 13:46:42.980809 4900 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49f275c1-19ff-4729-9cb5-736ec1525302" Dec 02 13:46:42 crc kubenswrapper[4900]: E1202 13:46:42.981464 4900 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:42 crc kubenswrapper[4900]: I1202 13:46:42.982741 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.216395 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.216872 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.217092 4900 status_manager.go:851] "Failed to get status for pod" podUID="eb44d009-5920-4606-aba3-aaf7104b1a22" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-dbc7k\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.217532 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.271593 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-session\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.271727 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-service-ca\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.271775 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-cliconfig\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.271844 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-trusted-ca-bundle\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.271922 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-login\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.272017 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqnsp\" (UniqueName: \"kubernetes.io/projected/eb44d009-5920-4606-aba3-aaf7104b1a22-kube-api-access-bqnsp\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: 
\"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.272087 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-error\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.272108 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-idp-0-file-data\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.272138 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-ocp-branding-template\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.272163 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-policies\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.272250 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-router-certs\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.272281 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-dir\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.272313 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-serving-cert\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.272347 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-provider-selection\") pod \"eb44d009-5920-4606-aba3-aaf7104b1a22\" (UID: \"eb44d009-5920-4606-aba3-aaf7104b1a22\") " Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.273065 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.273080 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.274115 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.274853 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.275230 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.281237 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.281591 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.281809 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.282457 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.282953 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.283106 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.283368 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb44d009-5920-4606-aba3-aaf7104b1a22-kube-api-access-bqnsp" (OuterVolumeSpecName: "kube-api-access-bqnsp") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "kube-api-access-bqnsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.283585 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.283905 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "eb44d009-5920-4606-aba3-aaf7104b1a22" (UID: "eb44d009-5920-4606-aba3-aaf7104b1a22"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375036 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375099 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375122 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375144 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375166 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375186 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqnsp\" (UniqueName: \"kubernetes.io/projected/eb44d009-5920-4606-aba3-aaf7104b1a22-kube-api-access-bqnsp\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375208 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375225 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375244 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375266 4900 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375286 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375306 4900 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb44d009-5920-4606-aba3-aaf7104b1a22-audit-dir\") on node \"crc\" 
DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375324 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.375347 4900 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb44d009-5920-4606-aba3-aaf7104b1a22-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.471948 4900 generic.go:334] "Generic (PLEG): container finished" podID="eb44d009-5920-4606-aba3-aaf7104b1a22" containerID="29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9" exitCode=0 Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.472112 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.472066 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" event={"ID":"eb44d009-5920-4606-aba3-aaf7104b1a22","Type":"ContainerDied","Data":"29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9"} Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.472409 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" event={"ID":"eb44d009-5920-4606-aba3-aaf7104b1a22","Type":"ContainerDied","Data":"c820e8f56dc30dfb25925b657ef8e62f670261e15215f14f989a378920312461"} Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.472452 4900 scope.go:117] "RemoveContainer" containerID="29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.473714 4900 status_manager.go:851] "Failed to get status for pod" podUID="eb44d009-5920-4606-aba3-aaf7104b1a22" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-dbc7k\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.475258 4900 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b12d4b485fd201860ff02e466e0908efd1a6e345ee4acdd7c2795036774bd4b5" exitCode=0 Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.475323 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.475315 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b12d4b485fd201860ff02e466e0908efd1a6e345ee4acdd7c2795036774bd4b5"} Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.475404 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5337a1e3ec42910f8c6c0f6a6bc7f622a03e40f70ac5b444d05bd37dd04b6c1a"} Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.475659 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.475903 4900 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49f275c1-19ff-4729-9cb5-736ec1525302" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.475935 4900 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49f275c1-19ff-4729-9cb5-736ec1525302" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.476085 4900 status_manager.go:851] "Failed to get status for pod" podUID="eb44d009-5920-4606-aba3-aaf7104b1a22" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-dbc7k\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: E1202 13:46:43.476410 4900 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.476669 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.477224 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.500096 4900 status_manager.go:851] "Failed to get status for pod" podUID="eb44d009-5920-4606-aba3-aaf7104b1a22" pod="openshift-authentication/oauth-openshift-558db77b4-dbc7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-dbc7k\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.500607 4900 status_manager.go:851] "Failed to get status for pod" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.501135 4900 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.514924 4900 scope.go:117] "RemoveContainer" containerID="29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9" Dec 02 13:46:43 crc kubenswrapper[4900]: E1202 13:46:43.515622 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9\": container with ID starting with 29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9 not found: ID does not exist" containerID="29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9" Dec 02 13:46:43 crc kubenswrapper[4900]: I1202 13:46:43.515697 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9"} err="failed to get container status \"29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9\": rpc error: code = NotFound desc = could not find container \"29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9\": container with ID starting with 29b58769cfbe6fcc5e96c1ccd7f8dbe92769a2cad1d297a361f969b9d2ef10c9 not found: ID does not exist" Dec 02 13:46:44 crc kubenswrapper[4900]: I1202 13:46:44.486810 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f84d89abdccf0f02a1f03db9831c16971fb921aaf1ffa1c5f697aa95f3a84155"} Dec 02 13:46:44 crc kubenswrapper[4900]: I1202 13:46:44.487138 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7767a973f944caca2ab45e10babd89427392e5e3209384640d921030a2ef1463"} Dec 02 13:46:44 crc kubenswrapper[4900]: I1202 13:46:44.539140 4900 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 02 13:46:44 crc kubenswrapper[4900]: I1202 13:46:44.539202 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 02 13:46:45 crc kubenswrapper[4900]: I1202 13:46:45.496775 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 13:46:45 crc kubenswrapper[4900]: I1202 13:46:45.497077 4900 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d" exitCode=1 Dec 02 13:46:45 crc kubenswrapper[4900]: I1202 13:46:45.497156 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d"} Dec 02 13:46:45 crc kubenswrapper[4900]: I1202 13:46:45.497860 4900 scope.go:117] "RemoveContainer" containerID="c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d" Dec 02 13:46:45 crc kubenswrapper[4900]: I1202 13:46:45.503208 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c98a471b71255e40bffd329c53d2740b55e59e0966d8371d19f1d5133962d8b6"} Dec 02 13:46:45 crc kubenswrapper[4900]: I1202 13:46:45.503266 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"89270db8c798e9501f3782f929723226ea3f16e85123de245ffc91f12ec2b99c"} Dec 02 13:46:45 crc kubenswrapper[4900]: I1202 13:46:45.503282 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f8fe8f97b86a83f97382ced7923fc26a49a6742ab1862558d801d41bca40f113"} Dec 02 13:46:45 crc kubenswrapper[4900]: I1202 13:46:45.503397 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:45 crc kubenswrapper[4900]: I1202 13:46:45.503550 4900 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49f275c1-19ff-4729-9cb5-736ec1525302" Dec 02 13:46:45 crc kubenswrapper[4900]: I1202 13:46:45.503583 4900 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49f275c1-19ff-4729-9cb5-736ec1525302" Dec 02 13:46:46 crc kubenswrapper[4900]: I1202 13:46:46.512403 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 13:46:46 crc kubenswrapper[4900]: I1202 13:46:46.512470 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8015b27f51b261eeafb12f31021a1e789052503f99506c86f0090c0a0121dca9"} Dec 02 13:46:47 crc kubenswrapper[4900]: I1202 13:46:47.983206 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:47 crc kubenswrapper[4900]: I1202 13:46:47.983703 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:47 crc kubenswrapper[4900]: I1202 13:46:47.993803 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:50 crc kubenswrapper[4900]: I1202 13:46:50.282258 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:46:50 crc kubenswrapper[4900]: I1202 13:46:50.282575 4900 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 02 13:46:50 crc kubenswrapper[4900]: I1202 13:46:50.282633 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 02 13:46:50 crc kubenswrapper[4900]: I1202 13:46:50.518990 4900 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:50 crc kubenswrapper[4900]: I1202 13:46:50.547309 4900 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49f275c1-19ff-4729-9cb5-736ec1525302" Dec 02 13:46:50 crc kubenswrapper[4900]: I1202 13:46:50.547360 4900 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49f275c1-19ff-4729-9cb5-736ec1525302" Dec 02 13:46:50 crc kubenswrapper[4900]: I1202 13:46:50.552232 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:46:50 crc kubenswrapper[4900]: I1202 13:46:50.628696 4900 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b41565a6-bb96-4c6c-8411-32f462c78065" Dec 02 13:46:51 crc kubenswrapper[4900]: I1202 13:46:51.555083 4900 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49f275c1-19ff-4729-9cb5-736ec1525302" Dec 02 13:46:51 crc kubenswrapper[4900]: I1202 13:46:51.555144 4900 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49f275c1-19ff-4729-9cb5-736ec1525302" Dec 02 13:46:54 crc kubenswrapper[4900]: I1202 13:46:54.538959 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:46:54 crc kubenswrapper[4900]: I1202 13:46:54.947409 4900 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b41565a6-bb96-4c6c-8411-32f462c78065" Dec 02 13:46:57 crc kubenswrapper[4900]: I1202 13:46:57.095010 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 02 13:46:58 crc kubenswrapper[4900]: I1202 13:46:58.126998 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 02 13:46:59 crc kubenswrapper[4900]: I1202 13:46:59.271501 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 02 13:47:00 crc kubenswrapper[4900]: I1202 13:47:00.282656 4900 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 02 13:47:00 crc kubenswrapper[4900]: I1202 13:47:00.282728 4900 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 02 13:47:01 crc kubenswrapper[4900]: I1202 13:47:01.143891 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 02 13:47:01 crc kubenswrapper[4900]: I1202 13:47:01.231701 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 02 13:47:02 crc kubenswrapper[4900]: I1202 13:47:02.332180 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 13:47:02 crc kubenswrapper[4900]: I1202 13:47:02.431266 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 13:47:03 crc kubenswrapper[4900]: I1202 13:47:03.067042 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 02 13:47:03 crc kubenswrapper[4900]: I1202 13:47:03.201330 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 02 13:47:03 crc kubenswrapper[4900]: I1202 13:47:03.202346 4900 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 02 13:47:03 crc kubenswrapper[4900]: I1202 13:47:03.404279 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 02 13:47:03 crc kubenswrapper[4900]: I1202 13:47:03.461762 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 02 13:47:03 crc kubenswrapper[4900]: I1202 13:47:03.503222 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 02 13:47:03 crc kubenswrapper[4900]: I1202 13:47:03.630130 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 02 13:47:03 crc kubenswrapper[4900]: I1202 13:47:03.675813 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 02 13:47:03 crc kubenswrapper[4900]: I1202 13:47:03.692821 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 02 13:47:03 crc kubenswrapper[4900]: I1202 13:47:03.718213 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 13:47:03 crc kubenswrapper[4900]: I1202 13:47:03.787124 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.061806 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.068092 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.156276 4900 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.185130 4900 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.193541 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=33.193510007 podStartE2EDuration="33.193510007s" podCreationTimestamp="2025-12-02 13:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:50.473204801 +0000 UTC m=+255.889018692" watchObservedRunningTime="2025-12-02 13:47:04.193510007 +0000 UTC m=+269.609323898" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.194482 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-dbc7k"] Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.194566 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.203675 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.227793 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.227763396 podStartE2EDuration="14.227763396s" podCreationTimestamp="2025-12-02 13:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:47:04.225357329 +0000 UTC m=+269.641171210" watchObservedRunningTime="2025-12-02 13:47:04.227763396 +0000 UTC m=+269.643577287" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.271569 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.272723 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.487589 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.674222 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.732279 4900 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.854751 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 02 13:47:04 crc kubenswrapper[4900]: I1202 13:47:04.923360 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb44d009-5920-4606-aba3-aaf7104b1a22" path="/var/lib/kubelet/pods/eb44d009-5920-4606-aba3-aaf7104b1a22/volumes" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.016936 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" 
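
A note on the probe entries in this window: the kube-controller-manager startup probe fails at 13:46:50.282 and 13:47:00.282 above and again at 13:47:10.316 below, a 10-second cadence, until the kubelet records "Container kube-controller-manager failed startup probe, will be restarted" and kills the container with gracePeriod=30. Each of these entries shares one frame: a systemd-journal prefix ("Dec 02 13:47:00 crc kubenswrapper[4900]:") followed by a klog header (severity letter I/W/E/F, an MMDD timestamp, a PID, and file.go:line]) and then a structured message. Below is a minimal Go sketch that splits lines of this shape and tallies repeated events such as "Probe failed"; it assumes only the line format visible in this excerpt, and the name klogscan.go is illustrative, not an existing tool.

    // klogscan.go - a minimal sketch (not part of the node tooling) that
    // splits journal lines like the ones in this log into their klog fields
    // and counts "Probe failed" events per emitting source location.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // Matches e.g.:
    // Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.316526 4900 prober.go:107] "Probe failed" ...
    var klogLine = regexp.MustCompile(
        `^(\w{3} \d{2} \d{2}:\d{2}:\d{2}) (\S+) (\S+)\[(\d+)\]: ([IWEF])(\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.+)$`)

    func main() {
        counts := make(map[string]int)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            m := klogLine.FindStringSubmatch(sc.Text())
            if m == nil {
                continue // continuation fragment or non-klog line
            }
            severity, source, msg := m[5], m[8], m[9]
            if strings.HasPrefix(msg, `"Probe failed"`) {
                counts[severity+" "+source]++
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
        for key, n := range counts {
            fmt.Printf("%5d %s\n", n, key)
        }
    }

Fed with, say, journalctl -u kubelet --no-pager | go run klogscan.go (a hypothetical invocation), it would report how many "Probe failed" events each source location, here prober.go:107, emitted over the captured window.
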
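The oauth-openshift-7544d6d989-6ddrr mount sequence recorded further below walks each of the pod's fourteen volumes through three phases: "operationExecutor.VerifyControllerAttachedVolume started" (reconciler_common.go:245), "operationExecutor.MountVolume started" (reconciler_common.go:218), and "MountVolume.SetUp succeeded" (operation_generator.go:637). A companion sketch, again assuming only the escaped-quote framing visible in those lines (the inner volume name arrives as \"audit-dir\"), flags any volume whose last recorded phase is not SetUp succeeded; volpair.go is likewise a hypothetical name.

    // volpair.go - a sketch that pairs kubelet volume events from stdin and
    // reports volumes that never reached MountVolume.SetUp succeeded.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches the three phases visible in this excerpt; the journal escapes
    // the quotes around the volume name as \" inside the outer message.
    var volEvent = regexp.MustCompile(
        `"(operationExecutor\.VerifyControllerAttachedVolume started|operationExecutor\.MountVolume started|MountVolume\.SetUp succeeded) for volume \\"([^"\\]+)\\"`)

    func main() {
        phase := make(map[string]string) // volume name -> last phase seen
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            if m := volEvent.FindStringSubmatch(sc.Text()); m != nil {
                phase[m[2]] = m[1]
            }
        }
        for vol, p := range phase {
            if p != "MountVolume.SetUp succeeded" {
                fmt.Printf("volume %q stalled at: %s\n", vol, p)
            }
        }
        fmt.Printf("%d volumes tracked\n", len(phase))
    }

On this excerpt all fourteen volumes, from audit-policies through kube-api-access-mgw9d, reach SetUp succeeded within the same second, so the sketch would print only the final count.
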
Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.020974 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.124000 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.230721 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.262146 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.300803 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.564177 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.600380 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.646259 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.786626 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.794739 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.804543 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 02 13:47:05 crc kubenswrapper[4900]: I1202 13:47:05.922390 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 02 13:47:06 crc kubenswrapper[4900]: I1202 13:47:06.020419 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 02 13:47:06 crc kubenswrapper[4900]: I1202 13:47:06.186347 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 02 13:47:06 crc kubenswrapper[4900]: I1202 13:47:06.186673 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 02 13:47:06 crc kubenswrapper[4900]: I1202 13:47:06.243504 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 02 13:47:06 crc kubenswrapper[4900]: I1202 13:47:06.417823 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 02 13:47:06 crc kubenswrapper[4900]: I1202 13:47:06.483982 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 13:47:06 crc kubenswrapper[4900]: I1202 13:47:06.532105 4900 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 02 13:47:06 crc kubenswrapper[4900]: I1202 13:47:06.982346 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.008010 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.037391 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.054189 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.102401 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.163669 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.191932 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.315237 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.316349 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.388681 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.402171 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.402247 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.517808 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.520950 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.616035 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.619251 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.779336 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.792016 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 02 13:47:07 crc kubenswrapper[4900]: I1202 13:47:07.962966 4900 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.026675 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.088810 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.181011 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.299637 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.330297 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.515711 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.597785 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.646349 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.885521 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.908227 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.918743 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 02 13:47:08 crc kubenswrapper[4900]: I1202 13:47:08.992519 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.000716 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.106038 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.129004 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.148497 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.169764 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.194687 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.211113 4900 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.211224 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.389588 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.548852 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.548885 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.594220 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.646907 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.733486 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.796832 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.812901 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.959161 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 13:47:09 crc kubenswrapper[4900]: I1202 13:47:09.979813 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.057011 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.228251 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.316435 4900 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.316526 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.316615 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.317463 4900 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"8015b27f51b261eeafb12f31021a1e789052503f99506c86f0090c0a0121dca9"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.317587 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://8015b27f51b261eeafb12f31021a1e789052503f99506c86f0090c0a0121dca9" gracePeriod=30 Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.331028 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.443979 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.509156 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.511296 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.511621 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.555534 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.579171 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.586757 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.650373 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.662150 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.663965 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.692817 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.715241 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.757261 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.760005 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7544d6d989-6ddrr"] Dec 02 13:47:10 crc 
kubenswrapper[4900]: E1202 13:47:10.760225 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" containerName="installer" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.760239 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" containerName="installer" Dec 02 13:47:10 crc kubenswrapper[4900]: E1202 13:47:10.760250 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb44d009-5920-4606-aba3-aaf7104b1a22" containerName="oauth-openshift" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.760256 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb44d009-5920-4606-aba3-aaf7104b1a22" containerName="oauth-openshift" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.760353 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb44d009-5920-4606-aba3-aaf7104b1a22" containerName="oauth-openshift" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.760362 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b64aa66-b5d3-4ffe-8686-f330c73f0ba4" containerName="installer" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.760840 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.764284 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.765214 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.765253 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.765294 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.765253 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.765917 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.766123 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.766177 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.766135 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.766883 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.769088 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.770780 4900 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.771474 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.776344 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7544d6d989-6ddrr"] Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.781956 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.785707 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.789081 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.842601 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.850329 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-audit-policies\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.850403 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-audit-dir\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.850475 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-session\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.850563 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-service-ca\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.850619 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-template-login\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.850700 4900 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-template-error\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.850757 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.850817 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-router-certs\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.850898 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.850939 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.851006 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgw9d\" (UniqueName: \"kubernetes.io/projected/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-kube-api-access-mgw9d\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.851044 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.851090 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: 
\"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.851122 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.888536 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.948699 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952045 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952150 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgw9d\" (UniqueName: \"kubernetes.io/projected/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-kube-api-access-mgw9d\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952205 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952263 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952316 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952385 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-audit-policies\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " 
pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952430 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-audit-dir\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952482 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-session\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952543 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-service-ca\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952591 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-template-login\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952668 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-template-error\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952750 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952813 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-router-certs\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.952864 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " 
pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.953742 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-audit-dir\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.954705 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.954735 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-audit-policies\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.956972 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.957197 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-service-ca\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.959030 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-session\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.959064 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.959592 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-template-login\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.960710 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-template-error\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.961511 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-router-certs\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.961827 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.962928 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.964932 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.965452 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.975581 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgw9d\" (UniqueName: \"kubernetes.io/projected/5ba21aee-dbe4-4403-ae72-f9fd5838bdb9-kube-api-access-mgw9d\") pod \"oauth-openshift-7544d6d989-6ddrr\" (UID: \"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9\") " pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:10 crc kubenswrapper[4900]: I1202 13:47:10.985622 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.011412 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.064689 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.069115 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 
13:47:11.090458 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.247497 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.322904 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7544d6d989-6ddrr"] Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.441806 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.461190 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.469136 4900 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.490101 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.627997 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.716234 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" event={"ID":"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9","Type":"ContainerStarted","Data":"243c3d5ea8b5836d284455b6fe95b74fbcc1ce3b99cdd6e4c6764a6cfbc4dc32"} Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.716523 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.716607 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" event={"ID":"5ba21aee-dbe4-4403-ae72-f9fd5838bdb9","Type":"ContainerStarted","Data":"cd102a3982b5db402463c236a5499505ce0565f8ca13eb860b6f5f640767ac0b"} Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.727170 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.744579 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" podStartSLOduration=54.744550263 podStartE2EDuration="54.744550263s" podCreationTimestamp="2025-12-02 13:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:47:11.74410053 +0000 UTC m=+277.159914411" watchObservedRunningTime="2025-12-02 13:47:11.744550263 +0000 UTC m=+277.160364154" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.840431 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.886273 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.915289 4900 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 02 13:47:11 crc kubenswrapper[4900]: I1202 13:47:11.916518 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.074019 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.077400 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.126149 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.152358 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.161938 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7544d6d989-6ddrr" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.165679 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.234930 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.344316 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.387728 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.407508 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.514053 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.528085 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.590172 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.616401 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.636194 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.648365 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.716839 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.765583 4900 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.777972 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.860776 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.899863 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.908301 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.936810 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 02 13:47:12 crc kubenswrapper[4900]: I1202 13:47:12.959326 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.039969 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.056764 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.065566 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.224473 4900 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.224970 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b9163bb656aae0e404e2f0e34a6c82b056577628ae9cc1240aa88ed1b1470aaa" gracePeriod=5 Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.272706 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.301908 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.310191 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.386537 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.398795 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.399385 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.434027 4900 reflector.go:368] Caches populated for 
Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.579790 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.680574 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.713661 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.818198 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.845927 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.931429 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.960530 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.975989 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 02 13:47:13 crc kubenswrapper[4900]: I1202 13:47:13.994316 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.000705 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.223429 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.272775 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.294394 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.302044 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.319795 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.340560 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.380335 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.386923 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.398299 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.400983 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.482478 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.544771 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.566482 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.588147 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.655250 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.808704 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.853447 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.863600 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Dec 02 13:47:14 crc kubenswrapper[4900]: I1202 13:47:14.931017 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 02 13:47:15 crc kubenswrapper[4900]: I1202 13:47:15.117936 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 02 13:47:15 crc kubenswrapper[4900]: I1202 13:47:15.302975 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 02 13:47:15 crc kubenswrapper[4900]: I1202 13:47:15.397865 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 02 13:47:15 crc kubenswrapper[4900]: I1202 13:47:15.451271 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 02 13:47:15 crc kubenswrapper[4900]: I1202 13:47:15.507423 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 02 13:47:15 crc kubenswrapper[4900]: I1202 13:47:15.510189 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 02 13:47:15 crc kubenswrapper[4900]: I1202 13:47:15.642824 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 02 13:47:15 crc kubenswrapper[4900]: I1202 13:47:15.670068 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 02 13:47:15 crc kubenswrapper[4900]: I1202 13:47:15.834481 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 02 13:47:15 crc kubenswrapper[4900]: I1202 13:47:15.902280 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 02 13:47:15 crc kubenswrapper[4900]: I1202 13:47:15.904377 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.005194 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.035172 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.328166 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.347505 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.351297 4900 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.467362 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.474893 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.503583 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.581746 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.605208 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.720842 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.732484 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.826042 4900 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.827698 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.832563 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 02 13:47:16 crc kubenswrapper[4900]: I1202 13:47:16.969338 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 02 13:47:17 crc kubenswrapper[4900]: I1202 13:47:17.082373 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 02 13:47:17 crc kubenswrapper[4900]: I1202 13:47:17.123057 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 02 13:47:17 crc kubenswrapper[4900]: I1202 13:47:17.176552 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 02 13:47:17 crc kubenswrapper[4900]: I1202 13:47:17.423541 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 02 13:47:17 crc kubenswrapper[4900]: I1202 13:47:17.668221 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 02 13:47:17 crc kubenswrapper[4900]: I1202 13:47:17.796541 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 02 13:47:17 crc kubenswrapper[4900]: I1202 13:47:17.943094 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.132482 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 02 13:47:18 crc kubenswrapper[4900]: E1202 13:47:18.389595 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-conmon-b9163bb656aae0e404e2f0e34a6c82b056577628ae9cc1240aa88ed1b1470aaa.scope\": RecentStats: unable to find data in memory cache]"
Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.394546 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.472218 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.527952 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.574503 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.769467 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.769568 4900 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b9163bb656aae0e404e2f0e34a6c82b056577628ae9cc1240aa88ed1b1470aaa" exitCode=137
Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.834520 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.834688 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.866333 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.885295 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.885561 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.885697 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.885856 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.887842 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.887937 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.887934 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.903959 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.921360 4900 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.951044 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.951130 4900 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f3b13fcf-962f-4983-af2d-e25bbbd65ecf" Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.951158 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.951170 4900 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f3b13fcf-962f-4983-af2d-e25bbbd65ecf" Dec 02 13:47:18 crc kubenswrapper[4900]: I1202 13:47:18.997401 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 13:47:19 crc kubenswrapper[4900]: I1202 13:47:18.997501 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:47:19 crc kubenswrapper[4900]: I1202 13:47:18.999216 4900 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:19 crc kubenswrapper[4900]: I1202 13:47:18.999259 4900 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:19 crc kubenswrapper[4900]: I1202 13:47:18.999278 4900 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:19 crc kubenswrapper[4900]: I1202 13:47:18.999296 4900 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:19 crc kubenswrapper[4900]: I1202 13:47:18.999313 4900 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:19 crc kubenswrapper[4900]: I1202 13:47:19.784554 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 02 13:47:19 crc kubenswrapper[4900]: I1202 13:47:19.785091 4900 scope.go:117] "RemoveContainer" containerID="b9163bb656aae0e404e2f0e34a6c82b056577628ae9cc1240aa88ed1b1470aaa" Dec 02 13:47:19 crc kubenswrapper[4900]: I1202 13:47:19.785909 4900 util.go:48] "No ready sandbox for pod can be found. 
Dec 02 13:47:20 crc kubenswrapper[4900]: I1202 13:47:20.251898 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 02 13:47:20 crc kubenswrapper[4900]: I1202 13:47:20.919542 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Dec 02 13:47:21 crc kubenswrapper[4900]: I1202 13:47:21.285188 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 02 13:47:36 crc kubenswrapper[4900]: I1202 13:47:36.907986 4900 generic.go:334] "Generic (PLEG): container finished" podID="de46dffd-919a-4df1-9d52-cbf1d14b8205" containerID="959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc" exitCode=0
Dec 02 13:47:36 crc kubenswrapper[4900]: I1202 13:47:36.908103 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" event={"ID":"de46dffd-919a-4df1-9d52-cbf1d14b8205","Type":"ContainerDied","Data":"959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc"}
Dec 02 13:47:36 crc kubenswrapper[4900]: I1202 13:47:36.911033 4900 scope.go:117] "RemoveContainer" containerID="959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc"
Dec 02 13:47:37 crc kubenswrapper[4900]: I1202 13:47:37.921028 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" event={"ID":"de46dffd-919a-4df1-9d52-cbf1d14b8205","Type":"ContainerStarted","Data":"c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738"}
Dec 02 13:47:37 crc kubenswrapper[4900]: I1202 13:47:37.922218 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7"
Dec 02 13:47:37 crc kubenswrapper[4900]: I1202 13:47:37.925836 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7"
Dec 02 13:47:40 crc kubenswrapper[4900]: I1202 13:47:40.944939 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Dec 02 13:47:40 crc kubenswrapper[4900]: I1202 13:47:40.948470 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Dec 02 13:47:40 crc kubenswrapper[4900]: I1202 13:47:40.948580 4900 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8015b27f51b261eeafb12f31021a1e789052503f99506c86f0090c0a0121dca9" exitCode=137
Dec 02 13:47:40 crc kubenswrapper[4900]: I1202 13:47:40.948716 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8015b27f51b261eeafb12f31021a1e789052503f99506c86f0090c0a0121dca9"}
Dec 02 13:47:40 crc kubenswrapper[4900]: I1202 13:47:40.948897 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b1381ef118e32a1e3e01fa80db8dc0f36be7e53532a5a776a0808d5447eb1d56"}
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b1381ef118e32a1e3e01fa80db8dc0f36be7e53532a5a776a0808d5447eb1d56"} Dec 02 13:47:40 crc kubenswrapper[4900]: I1202 13:47:40.948983 4900 scope.go:117] "RemoveContainer" containerID="c4cf112d50a8d09fb39927314bdbf56b4fa405786f90e333560c045defc7cf9d" Dec 02 13:47:41 crc kubenswrapper[4900]: I1202 13:47:41.970012 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 02 13:47:44 crc kubenswrapper[4900]: I1202 13:47:44.539437 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:47:50 crc kubenswrapper[4900]: I1202 13:47:50.281597 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:47:50 crc kubenswrapper[4900]: I1202 13:47:50.289040 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:47:51 crc kubenswrapper[4900]: I1202 13:47:51.052532 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:48:01 crc kubenswrapper[4900]: I1202 13:48:01.401461 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mq2gm"] Dec 02 13:48:01 crc kubenswrapper[4900]: I1202 13:48:01.402179 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" podUID="a72dd04d-bb06-4b3a-9f08-d68072239bd8" containerName="controller-manager" containerID="cri-o://d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3" gracePeriod=30 Dec 02 13:48:01 crc kubenswrapper[4900]: I1202 13:48:01.532482 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6"] Dec 02 13:48:01 crc kubenswrapper[4900]: I1202 13:48:01.532714 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" podUID="65ea3056-b990-4a94-a5aa-56a2a0f24b92" containerName="route-controller-manager" containerID="cri-o://f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60" gracePeriod=30 Dec 02 13:48:01 crc kubenswrapper[4900]: I1202 13:48:01.881541 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.014344 4900 util.go:48] "No ready sandbox for pod can be found. 
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.019027 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-client-ca\") pod \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") "
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.019707 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-client-ca" (OuterVolumeSpecName: "client-ca") pod "a72dd04d-bb06-4b3a-9f08-d68072239bd8" (UID: "a72dd04d-bb06-4b3a-9f08-d68072239bd8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.019955 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-proxy-ca-bundles\") pod \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") "
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.020377 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a72dd04d-bb06-4b3a-9f08-d68072239bd8" (UID: "a72dd04d-bb06-4b3a-9f08-d68072239bd8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.020457 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-config\") pod \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") "
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.020505 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a72dd04d-bb06-4b3a-9f08-d68072239bd8-serving-cert\") pod \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") "
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.020980 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-config" (OuterVolumeSpecName: "config") pod "a72dd04d-bb06-4b3a-9f08-d68072239bd8" (UID: "a72dd04d-bb06-4b3a-9f08-d68072239bd8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.021224 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mxsd\" (UniqueName: \"kubernetes.io/projected/a72dd04d-bb06-4b3a-9f08-d68072239bd8-kube-api-access-5mxsd\") pod \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\" (UID: \"a72dd04d-bb06-4b3a-9f08-d68072239bd8\") " Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.021493 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.021509 4900 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.021517 4900 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a72dd04d-bb06-4b3a-9f08-d68072239bd8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.031905 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72dd04d-bb06-4b3a-9f08-d68072239bd8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a72dd04d-bb06-4b3a-9f08-d68072239bd8" (UID: "a72dd04d-bb06-4b3a-9f08-d68072239bd8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.035072 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72dd04d-bb06-4b3a-9f08-d68072239bd8-kube-api-access-5mxsd" (OuterVolumeSpecName: "kube-api-access-5mxsd") pod "a72dd04d-bb06-4b3a-9f08-d68072239bd8" (UID: "a72dd04d-bb06-4b3a-9f08-d68072239bd8"). InnerVolumeSpecName "kube-api-access-5mxsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.121764 4900 generic.go:334] "Generic (PLEG): container finished" podID="a72dd04d-bb06-4b3a-9f08-d68072239bd8" containerID="d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3" exitCode=0 Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.121826 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" event={"ID":"a72dd04d-bb06-4b3a-9f08-d68072239bd8","Type":"ContainerDied","Data":"d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3"} Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.121955 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8v22\" (UniqueName: \"kubernetes.io/projected/65ea3056-b990-4a94-a5aa-56a2a0f24b92-kube-api-access-j8v22\") pod \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") " Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.121984 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mq2gm" event={"ID":"a72dd04d-bb06-4b3a-9f08-d68072239bd8","Type":"ContainerDied","Data":"e292cf1fbe3da66874b438ca007155905a77917a333e39c8b3a0eab3956f34b3"} Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.121863 4900 util.go:48] "No ready sandbox for pod can be found. 
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.122069 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-config\") pod \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") "
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.122107 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ea3056-b990-4a94-a5aa-56a2a0f24b92-serving-cert\") pod \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") "
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.122145 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-client-ca\") pod \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\" (UID: \"65ea3056-b990-4a94-a5aa-56a2a0f24b92\") "
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.122362 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mxsd\" (UniqueName: \"kubernetes.io/projected/a72dd04d-bb06-4b3a-9f08-d68072239bd8-kube-api-access-5mxsd\") on node \"crc\" DevicePath \"\""
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.122374 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a72dd04d-bb06-4b3a-9f08-d68072239bd8-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.123008 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-client-ca" (OuterVolumeSpecName: "client-ca") pod "65ea3056-b990-4a94-a5aa-56a2a0f24b92" (UID: "65ea3056-b990-4a94-a5aa-56a2a0f24b92"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.123040 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-config" (OuterVolumeSpecName: "config") pod "65ea3056-b990-4a94-a5aa-56a2a0f24b92" (UID: "65ea3056-b990-4a94-a5aa-56a2a0f24b92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.124703 4900 generic.go:334] "Generic (PLEG): container finished" podID="65ea3056-b990-4a94-a5aa-56a2a0f24b92" containerID="f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60" exitCode=0 Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.124747 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" event={"ID":"65ea3056-b990-4a94-a5aa-56a2a0f24b92","Type":"ContainerDied","Data":"f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60"} Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.124783 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" event={"ID":"65ea3056-b990-4a94-a5aa-56a2a0f24b92","Type":"ContainerDied","Data":"2ed645e8285949297b56ce5a825c42c6f6125f5a4c284065f853ce3a6c2834bb"} Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.124845 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.126024 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ea3056-b990-4a94-a5aa-56a2a0f24b92-kube-api-access-j8v22" (OuterVolumeSpecName: "kube-api-access-j8v22") pod "65ea3056-b990-4a94-a5aa-56a2a0f24b92" (UID: "65ea3056-b990-4a94-a5aa-56a2a0f24b92"). InnerVolumeSpecName "kube-api-access-j8v22". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.122006 4900 scope.go:117] "RemoveContainer" containerID="d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.127543 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ea3056-b990-4a94-a5aa-56a2a0f24b92-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "65ea3056-b990-4a94-a5aa-56a2a0f24b92" (UID: "65ea3056-b990-4a94-a5aa-56a2a0f24b92"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.147235 4900 scope.go:117] "RemoveContainer" containerID="d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3" Dec 02 13:48:02 crc kubenswrapper[4900]: E1202 13:48:02.147915 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3\": container with ID starting with d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3 not found: ID does not exist" containerID="d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.147969 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3"} err="failed to get container status \"d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3\": rpc error: code = NotFound desc = could not find container \"d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3\": container with ID starting with d0e825568648b17925d875171503c3b562f328dfa7130f6f018afd27430014a3 not found: ID does not exist" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.148006 4900 scope.go:117] "RemoveContainer" containerID="f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.168629 4900 scope.go:117] "RemoveContainer" containerID="f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60" Dec 02 13:48:02 crc kubenswrapper[4900]: E1202 13:48:02.169022 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60\": container with ID starting with f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60 not found: ID does not exist" containerID="f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.169060 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60"} err="failed to get container status \"f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60\": rpc error: code = NotFound desc = could not find container \"f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60\": container with ID starting with f40fab5372901dc4bbac777c4d7714272f2bc4cfee1261d36e9327de711acc60 not found: ID does not exist" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.169966 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mq2gm"] Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.174743 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mq2gm"] Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.224498 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8v22\" (UniqueName: \"kubernetes.io/projected/65ea3056-b990-4a94-a5aa-56a2a0f24b92-kube-api-access-j8v22\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.224559 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.224577 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ea3056-b990-4a94-a5aa-56a2a0f24b92-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.224594 4900 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ea3056-b990-4a94-a5aa-56a2a0f24b92-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.455723 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6"] Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.458830 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qpd6"] Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.922903 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ea3056-b990-4a94-a5aa-56a2a0f24b92" path="/var/lib/kubelet/pods/65ea3056-b990-4a94-a5aa-56a2a0f24b92/volumes" Dec 02 13:48:02 crc kubenswrapper[4900]: I1202 13:48:02.924249 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72dd04d-bb06-4b3a-9f08-d68072239bd8" path="/var/lib/kubelet/pods/a72dd04d-bb06-4b3a-9f08-d68072239bd8/volumes" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.216905 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds"] Dec 02 13:48:03 crc kubenswrapper[4900]: E1202 13:48:03.217331 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ea3056-b990-4a94-a5aa-56a2a0f24b92" containerName="route-controller-manager" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.217359 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ea3056-b990-4a94-a5aa-56a2a0f24b92" containerName="route-controller-manager" Dec 02 13:48:03 crc kubenswrapper[4900]: E1202 13:48:03.217386 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.217398 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 13:48:03 crc kubenswrapper[4900]: E1202 13:48:03.217410 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72dd04d-bb06-4b3a-9f08-d68072239bd8" containerName="controller-manager" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.217421 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72dd04d-bb06-4b3a-9f08-d68072239bd8" containerName="controller-manager" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.217592 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.217612 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a72dd04d-bb06-4b3a-9f08-d68072239bd8" containerName="controller-manager" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.217684 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ea3056-b990-4a94-a5aa-56a2a0f24b92" containerName="route-controller-manager" 
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.218251 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.224208 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d85cd544-wl8wl"]
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.225687 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.228212 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.228683 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.228834 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.228962 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.229094 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.229187 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.229529 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.230015 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.230327 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.230706 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.230985 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.238286 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.250175 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.250380 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds"]
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.253582 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d85cd544-wl8wl"]
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.346720 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-client-ca\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-client-ca\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.347053 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-config\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.347187 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ljzp\" (UniqueName: \"kubernetes.io/projected/800f2bea-945f-4d35-94a5-3889a856b2b1-kube-api-access-5ljzp\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.347366 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/800f2bea-945f-4d35-94a5-3889a856b2b1-serving-cert\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.347413 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5da985e-bb09-4210-b340-4d3dac939b57-serving-cert\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.347463 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-proxy-ca-bundles\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.347509 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkd7s\" (UniqueName: \"kubernetes.io/projected/d5da985e-bb09-4210-b340-4d3dac939b57-kube-api-access-nkd7s\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.347552 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-config\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.347584 4900 
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.449399 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/800f2bea-945f-4d35-94a5-3889a856b2b1-serving-cert\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.449730 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5da985e-bb09-4210-b340-4d3dac939b57-serving-cert\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.449872 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-proxy-ca-bundles\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.449989 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkd7s\" (UniqueName: \"kubernetes.io/projected/d5da985e-bb09-4210-b340-4d3dac939b57-kube-api-access-nkd7s\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.450101 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-config\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.450214 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-client-ca\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.450385 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-client-ca\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds"
Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.450527 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-config\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl"
\"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-config\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.450631 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ljzp\" (UniqueName: \"kubernetes.io/projected/800f2bea-945f-4d35-94a5-3889a856b2b1-kube-api-access-5ljzp\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.452258 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-client-ca\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.452301 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-client-ca\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.452504 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-proxy-ca-bundles\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.454106 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-config\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.454381 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-config\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.458758 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5da985e-bb09-4210-b340-4d3dac939b57-serving-cert\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.461667 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/800f2bea-945f-4d35-94a5-3889a856b2b1-serving-cert\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " 
pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.477499 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkd7s\" (UniqueName: \"kubernetes.io/projected/d5da985e-bb09-4210-b340-4d3dac939b57-kube-api-access-nkd7s\") pod \"controller-manager-5d85cd544-wl8wl\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.483513 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ljzp\" (UniqueName: \"kubernetes.io/projected/800f2bea-945f-4d35-94a5-3889a856b2b1-kube-api-access-5ljzp\") pod \"route-controller-manager-548cc76b94-476ds\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.543840 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.604544 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.814473 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds"] Dec 02 13:48:03 crc kubenswrapper[4900]: I1202 13:48:03.860148 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d85cd544-wl8wl"] Dec 02 13:48:03 crc kubenswrapper[4900]: W1202 13:48:03.868916 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5da985e_bb09_4210_b340_4d3dac939b57.slice/crio-983c6aa815023bf8bd7ffbd93456b1f50f0d5af5b46d09b4fab97ebc27882e9d WatchSource:0}: Error finding container 983c6aa815023bf8bd7ffbd93456b1f50f0d5af5b46d09b4fab97ebc27882e9d: Status 404 returned error can't find the container with id 983c6aa815023bf8bd7ffbd93456b1f50f0d5af5b46d09b4fab97ebc27882e9d Dec 02 13:48:04 crc kubenswrapper[4900]: I1202 13:48:04.144999 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" event={"ID":"d5da985e-bb09-4210-b340-4d3dac939b57","Type":"ContainerStarted","Data":"8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5"} Dec 02 13:48:04 crc kubenswrapper[4900]: I1202 13:48:04.147427 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" event={"ID":"d5da985e-bb09-4210-b340-4d3dac939b57","Type":"ContainerStarted","Data":"983c6aa815023bf8bd7ffbd93456b1f50f0d5af5b46d09b4fab97ebc27882e9d"} Dec 02 13:48:04 crc kubenswrapper[4900]: I1202 13:48:04.147578 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:04 crc kubenswrapper[4900]: I1202 13:48:04.147711 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:04 crc kubenswrapper[4900]: I1202 13:48:04.147818 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" event={"ID":"800f2bea-945f-4d35-94a5-3889a856b2b1","Type":"ContainerStarted","Data":"b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1"} Dec 02 13:48:04 crc kubenswrapper[4900]: I1202 13:48:04.147907 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" event={"ID":"800f2bea-945f-4d35-94a5-3889a856b2b1","Type":"ContainerStarted","Data":"6fd8af256f0fc26ed3d36e3f172aba3fe195806cfdc4176f12bcc811ab412951"} Dec 02 13:48:04 crc kubenswrapper[4900]: I1202 13:48:04.151850 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:04 crc kubenswrapper[4900]: I1202 13:48:04.167003 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" podStartSLOduration=3.166978199 podStartE2EDuration="3.166978199s" podCreationTimestamp="2025-12-02 13:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:48:04.16555461 +0000 UTC m=+329.581368461" watchObservedRunningTime="2025-12-02 13:48:04.166978199 +0000 UTC m=+329.582792050" Dec 02 13:48:04 crc kubenswrapper[4900]: I1202 13:48:04.187843 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" podStartSLOduration=3.187811862 podStartE2EDuration="3.187811862s" podCreationTimestamp="2025-12-02 13:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:48:04.184252074 +0000 UTC m=+329.600065925" watchObservedRunningTime="2025-12-02 13:48:04.187811862 +0000 UTC m=+329.603625733" Dec 02 13:48:04 crc kubenswrapper[4900]: I1202 13:48:04.409042 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.363921 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s7ml6"] Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.365944 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s7ml6" podUID="5982b283-c40f-4ff6-9ee9-55a16f1db376" containerName="registry-server" containerID="cri-o://3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68" gracePeriod=30 Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.377326 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2sjzb"] Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.377944 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2sjzb" podUID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" containerName="registry-server" containerID="cri-o://d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a" gracePeriod=30 Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.396540 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgtk7"] Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.396837 4900 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" podUID="de46dffd-919a-4df1-9d52-cbf1d14b8205" containerName="marketplace-operator" containerID="cri-o://c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738" gracePeriod=30 Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.407182 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zkbx"] Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.407450 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4zkbx" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerName="registry-server" containerID="cri-o://4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6" gracePeriod=30 Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.421965 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvppg"] Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.422531 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvppg" podUID="60614c3a-d991-4156-9d83-55ab06706291" containerName="registry-server" containerID="cri-o://afd2d17e10a45c0e607700ffa0542ee6834160795527ca24e1f29d6856731ce3" gracePeriod=30 Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.433108 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wfqvd"] Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.434043 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.445935 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wfqvd"] Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.540717 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe7ce6ee-fda9-4b74-a46d-4918743dbeb8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wfqvd\" (UID: \"fe7ce6ee-fda9-4b74-a46d-4918743dbeb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.540775 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe7ce6ee-fda9-4b74-a46d-4918743dbeb8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wfqvd\" (UID: \"fe7ce6ee-fda9-4b74-a46d-4918743dbeb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.540812 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx7s4\" (UniqueName: \"kubernetes.io/projected/fe7ce6ee-fda9-4b74-a46d-4918743dbeb8-kube-api-access-mx7s4\") pod \"marketplace-operator-79b997595-wfqvd\" (UID: \"fe7ce6ee-fda9-4b74-a46d-4918743dbeb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.641708 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
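[Editor's note] The SyncLoop DELETE entries above and the kuberuntime_container.go:808 entries around them show the kubelet killing each marketplace container with gracePeriod=30: a polite stop first, force-kill only if the process outlives the grace period. A self-contained sketch of that TERM-then-KILL pattern (unix-only, my own names; the kubelet does this through the CRI StopContainer call rather than raw signals):

```go
// gracekill.go: the stop pattern behind "Killing container with a grace
// period": signal the process to exit, wait up to the grace period, then
// force-kill whatever is left.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	cmd.Process.Signal(syscall.SIGTERM) // polite request, like CRI StopContainer
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(grace):
		cmd.Process.Kill() // SIGKILL escalation, like gracePeriod=30 expiring
		<-done
		fmt.Println("force-killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 3*time.Second)
}
```

The registry-server containers below all exit with code 0, so in this log the SIGKILL branch was never needed.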
\"kubernetes.io/configmap/fe7ce6ee-fda9-4b74-a46d-4918743dbeb8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wfqvd\" (UID: \"fe7ce6ee-fda9-4b74-a46d-4918743dbeb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.642039 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx7s4\" (UniqueName: \"kubernetes.io/projected/fe7ce6ee-fda9-4b74-a46d-4918743dbeb8-kube-api-access-mx7s4\") pod \"marketplace-operator-79b997595-wfqvd\" (UID: \"fe7ce6ee-fda9-4b74-a46d-4918743dbeb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.642305 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe7ce6ee-fda9-4b74-a46d-4918743dbeb8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wfqvd\" (UID: \"fe7ce6ee-fda9-4b74-a46d-4918743dbeb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.643314 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe7ce6ee-fda9-4b74-a46d-4918743dbeb8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wfqvd\" (UID: \"fe7ce6ee-fda9-4b74-a46d-4918743dbeb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.649042 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe7ce6ee-fda9-4b74-a46d-4918743dbeb8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wfqvd\" (UID: \"fe7ce6ee-fda9-4b74-a46d-4918743dbeb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.661560 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx7s4\" (UniqueName: \"kubernetes.io/projected/fe7ce6ee-fda9-4b74-a46d-4918743dbeb8-kube-api-access-mx7s4\") pod \"marketplace-operator-79b997595-wfqvd\" (UID: \"fe7ce6ee-fda9-4b74-a46d-4918743dbeb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:15 crc kubenswrapper[4900]: E1202 13:48:15.738637 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6 is running failed: container process not found" containerID="4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 13:48:15 crc kubenswrapper[4900]: E1202 13:48:15.739340 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6 is running failed: container process not found" containerID="4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 13:48:15 crc kubenswrapper[4900]: E1202 13:48:15.739840 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6 is running failed: container process not found" containerID="4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 13:48:15 crc kubenswrapper[4900]: E1202 13:48:15.739886 4900 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4zkbx" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerName="registry-server" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.852834 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.859899 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.947542 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-catalog-content\") pod \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.947883 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26kk\" (UniqueName: \"kubernetes.io/projected/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-kube-api-access-m26kk\") pod \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.947962 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-utilities\") pod \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\" (UID: \"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37\") " Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.949260 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-utilities" (OuterVolumeSpecName: "utilities") pod "13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" (UID: "13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:48:15 crc kubenswrapper[4900]: I1202 13:48:15.965489 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-kube-api-access-m26kk" (OuterVolumeSpecName: "kube-api-access-m26kk") pod "13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" (UID: "13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37"). InnerVolumeSpecName "kube-api-access-m26kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.022882 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" (UID: "13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37"). InnerVolumeSpecName "catalog-content". 
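[Editor's note] The three ExecSync failures above come from the readiness probe of the registry-server container, which execs grpc_health_probe -addr=:50051 inside it; the NotFound errors are expected here because the container process was already gone when the probe fired, which is why the prober entry that follows is noise rather than a real failure. Run by hand, the same probe reports serving status through its exit code; a sketch assuming the grpc_health_probe binary is on PATH:

```go
// probecheck.go: runs the same command the failing readiness probe above
// executes, and interprets the result the way the kubelet prober does:
// exit code 0 means serving, anything else means not ready.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("grpc_health_probe", "-addr=:50051").CombinedOutput()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("probe failed with exit code", ee.ExitCode()) // not ready
	} else if err == nil {
		fmt.Println("probe succeeded: serving") // readiness gate would pass
	} else {
		fmt.Println("could not run probe:", err) // e.g. binary not on PATH
	}
}
```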
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.044834 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zkbx" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.049256 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m26kk\" (UniqueName: \"kubernetes.io/projected/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-kube-api-access-m26kk\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.049279 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.049290 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.056858 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.078515 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.150961 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn7gg\" (UniqueName: \"kubernetes.io/projected/de46dffd-919a-4df1-9d52-cbf1d14b8205-kube-api-access-jn7gg\") pod \"de46dffd-919a-4df1-9d52-cbf1d14b8205\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.151011 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-trusted-ca\") pod \"de46dffd-919a-4df1-9d52-cbf1d14b8205\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.151041 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsh4r\" (UniqueName: \"kubernetes.io/projected/3dc4aaac-9b9e-42e6-b943-25236645d1b2-kube-api-access-wsh4r\") pod \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.151083 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-catalog-content\") pod \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.151140 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-operator-metrics\") pod \"de46dffd-919a-4df1-9d52-cbf1d14b8205\" (UID: \"de46dffd-919a-4df1-9d52-cbf1d14b8205\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.151200 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-utilities\") pod \"5982b283-c40f-4ff6-9ee9-55a16f1db376\" (UID: \"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.151239 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-catalog-content\") pod \"5982b283-c40f-4ff6-9ee9-55a16f1db376\" (UID: \"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.151270 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-utilities\") pod \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\" (UID: \"3dc4aaac-9b9e-42e6-b943-25236645d1b2\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.151291 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgx64\" (UniqueName: \"kubernetes.io/projected/5982b283-c40f-4ff6-9ee9-55a16f1db376-kube-api-access-qgx64\") pod \"5982b283-c40f-4ff6-9ee9-55a16f1db376\" (UID: \"5982b283-c40f-4ff6-9ee9-55a16f1db376\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.152406 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "de46dffd-919a-4df1-9d52-cbf1d14b8205" (UID: "de46dffd-919a-4df1-9d52-cbf1d14b8205"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.152457 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-utilities" (OuterVolumeSpecName: "utilities") pod "3dc4aaac-9b9e-42e6-b943-25236645d1b2" (UID: "3dc4aaac-9b9e-42e6-b943-25236645d1b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.153373 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-utilities" (OuterVolumeSpecName: "utilities") pod "5982b283-c40f-4ff6-9ee9-55a16f1db376" (UID: "5982b283-c40f-4ff6-9ee9-55a16f1db376"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.154692 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de46dffd-919a-4df1-9d52-cbf1d14b8205-kube-api-access-jn7gg" (OuterVolumeSpecName: "kube-api-access-jn7gg") pod "de46dffd-919a-4df1-9d52-cbf1d14b8205" (UID: "de46dffd-919a-4df1-9d52-cbf1d14b8205"). InnerVolumeSpecName "kube-api-access-jn7gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.154873 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5982b283-c40f-4ff6-9ee9-55a16f1db376-kube-api-access-qgx64" (OuterVolumeSpecName: "kube-api-access-qgx64") pod "5982b283-c40f-4ff6-9ee9-55a16f1db376" (UID: "5982b283-c40f-4ff6-9ee9-55a16f1db376"). InnerVolumeSpecName "kube-api-access-qgx64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.155107 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "de46dffd-919a-4df1-9d52-cbf1d14b8205" (UID: "de46dffd-919a-4df1-9d52-cbf1d14b8205"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.155363 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dc4aaac-9b9e-42e6-b943-25236645d1b2-kube-api-access-wsh4r" (OuterVolumeSpecName: "kube-api-access-wsh4r") pod "3dc4aaac-9b9e-42e6-b943-25236645d1b2" (UID: "3dc4aaac-9b9e-42e6-b943-25236645d1b2"). InnerVolumeSpecName "kube-api-access-wsh4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.191733 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3dc4aaac-9b9e-42e6-b943-25236645d1b2" (UID: "3dc4aaac-9b9e-42e6-b943-25236645d1b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.205804 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5982b283-c40f-4ff6-9ee9-55a16f1db376" (UID: "5982b283-c40f-4ff6-9ee9-55a16f1db376"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.222550 4900 generic.go:334] "Generic (PLEG): container finished" podID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerID="4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6" exitCode=0 Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.222662 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zkbx" event={"ID":"3dc4aaac-9b9e-42e6-b943-25236645d1b2","Type":"ContainerDied","Data":"4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6"} Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.222696 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zkbx" event={"ID":"3dc4aaac-9b9e-42e6-b943-25236645d1b2","Type":"ContainerDied","Data":"88e5088246db4f297f05cf9aa3f4d7e62c98ce07d883e362aa81760be287671d"} Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.222693 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zkbx" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.222716 4900 scope.go:117] "RemoveContainer" containerID="4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.225603 4900 generic.go:334] "Generic (PLEG): container finished" podID="5982b283-c40f-4ff6-9ee9-55a16f1db376" containerID="3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68" exitCode=0 Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.225802 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ml6" event={"ID":"5982b283-c40f-4ff6-9ee9-55a16f1db376","Type":"ContainerDied","Data":"3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68"} Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.225830 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ml6" event={"ID":"5982b283-c40f-4ff6-9ee9-55a16f1db376","Type":"ContainerDied","Data":"a76989282269731c1329e73acd66e3e3b779c29784e1301b6634a816e1ccfb2b"} Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.225887 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7ml6" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.232257 4900 generic.go:334] "Generic (PLEG): container finished" podID="de46dffd-919a-4df1-9d52-cbf1d14b8205" containerID="c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738" exitCode=0 Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.232339 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" event={"ID":"de46dffd-919a-4df1-9d52-cbf1d14b8205","Type":"ContainerDied","Data":"c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738"} Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.232351 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.232371 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgtk7" event={"ID":"de46dffd-919a-4df1-9d52-cbf1d14b8205","Type":"ContainerDied","Data":"74a0d6da2851a18d3196b2877f01ae6bbfd1c4280a34eb17eaa1ba3890c6cc3a"} Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.238365 4900 generic.go:334] "Generic (PLEG): container finished" podID="60614c3a-d991-4156-9d83-55ab06706291" containerID="afd2d17e10a45c0e607700ffa0542ee6834160795527ca24e1f29d6856731ce3" exitCode=0 Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.238403 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvppg" event={"ID":"60614c3a-d991-4156-9d83-55ab06706291","Type":"ContainerDied","Data":"afd2d17e10a45c0e607700ffa0542ee6834160795527ca24e1f29d6856731ce3"} Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.239783 4900 scope.go:117] "RemoveContainer" containerID="d69f1b935295e89fcde9d73508b4b101654c4c77d7f350fa96e2493f90afb800" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.240323 4900 generic.go:334] "Generic (PLEG): container finished" podID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" containerID="d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a" exitCode=0 Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.240358 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sjzb" event={"ID":"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37","Type":"ContainerDied","Data":"d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a"} Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.240386 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2sjzb" event={"ID":"13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37","Type":"ContainerDied","Data":"dfd29f995880190052f8296d1017cdef77fcc3eadb87632577ebdc2eb516ce87"} Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.240559 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2sjzb" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.252565 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.252605 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5982b283-c40f-4ff6-9ee9-55a16f1db376-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.252622 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.252636 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgx64\" (UniqueName: \"kubernetes.io/projected/5982b283-c40f-4ff6-9ee9-55a16f1db376-kube-api-access-qgx64\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.252674 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn7gg\" (UniqueName: \"kubernetes.io/projected/de46dffd-919a-4df1-9d52-cbf1d14b8205-kube-api-access-jn7gg\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.252687 4900 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.252701 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsh4r\" (UniqueName: \"kubernetes.io/projected/3dc4aaac-9b9e-42e6-b943-25236645d1b2-kube-api-access-wsh4r\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.252716 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc4aaac-9b9e-42e6-b943-25236645d1b2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.252732 4900 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de46dffd-919a-4df1-9d52-cbf1d14b8205-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.256622 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zkbx"] Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.259393 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zkbx"] Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.274769 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgtk7"] Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.275475 4900 scope.go:117] "RemoveContainer" containerID="188c8c612272e9b8da840e7be48577ca7a2b09776a1c1f694a0cec7db28d2d16" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.279509 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgtk7"] Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.288756 4900 scope.go:117] "RemoveContainer" 
containerID="4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6" Dec 02 13:48:16 crc kubenswrapper[4900]: E1202 13:48:16.290248 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6\": container with ID starting with 4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6 not found: ID does not exist" containerID="4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.290292 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6"} err="failed to get container status \"4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6\": rpc error: code = NotFound desc = could not find container \"4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6\": container with ID starting with 4e237d6e3093cc81d8f0a33e46f367c4b10434c2146e01ef09c93fb665bb39e6 not found: ID does not exist" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.290332 4900 scope.go:117] "RemoveContainer" containerID="d69f1b935295e89fcde9d73508b4b101654c4c77d7f350fa96e2493f90afb800" Dec 02 13:48:16 crc kubenswrapper[4900]: E1202 13:48:16.290915 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d69f1b935295e89fcde9d73508b4b101654c4c77d7f350fa96e2493f90afb800\": container with ID starting with d69f1b935295e89fcde9d73508b4b101654c4c77d7f350fa96e2493f90afb800 not found: ID does not exist" containerID="d69f1b935295e89fcde9d73508b4b101654c4c77d7f350fa96e2493f90afb800" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.290939 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d69f1b935295e89fcde9d73508b4b101654c4c77d7f350fa96e2493f90afb800"} err="failed to get container status \"d69f1b935295e89fcde9d73508b4b101654c4c77d7f350fa96e2493f90afb800\": rpc error: code = NotFound desc = could not find container \"d69f1b935295e89fcde9d73508b4b101654c4c77d7f350fa96e2493f90afb800\": container with ID starting with d69f1b935295e89fcde9d73508b4b101654c4c77d7f350fa96e2493f90afb800 not found: ID does not exist" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.290959 4900 scope.go:117] "RemoveContainer" containerID="188c8c612272e9b8da840e7be48577ca7a2b09776a1c1f694a0cec7db28d2d16" Dec 02 13:48:16 crc kubenswrapper[4900]: E1202 13:48:16.292267 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"188c8c612272e9b8da840e7be48577ca7a2b09776a1c1f694a0cec7db28d2d16\": container with ID starting with 188c8c612272e9b8da840e7be48577ca7a2b09776a1c1f694a0cec7db28d2d16 not found: ID does not exist" containerID="188c8c612272e9b8da840e7be48577ca7a2b09776a1c1f694a0cec7db28d2d16" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.292287 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"188c8c612272e9b8da840e7be48577ca7a2b09776a1c1f694a0cec7db28d2d16"} err="failed to get container status \"188c8c612272e9b8da840e7be48577ca7a2b09776a1c1f694a0cec7db28d2d16\": rpc error: code = NotFound desc = could not find container \"188c8c612272e9b8da840e7be48577ca7a2b09776a1c1f694a0cec7db28d2d16\": container with ID starting with 
188c8c612272e9b8da840e7be48577ca7a2b09776a1c1f694a0cec7db28d2d16 not found: ID does not exist" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.292303 4900 scope.go:117] "RemoveContainer" containerID="3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.294298 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2sjzb"] Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.302799 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2sjzb"] Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.307333 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s7ml6"] Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.310885 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s7ml6"] Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.314709 4900 scope.go:117] "RemoveContainer" containerID="b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.333308 4900 scope.go:117] "RemoveContainer" containerID="7adaa8671021c58b04fe41d1fde12fc70775516be7da16763304915be6f94f84" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.336617 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wfqvd"] Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.356565 4900 scope.go:117] "RemoveContainer" containerID="3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68" Dec 02 13:48:16 crc kubenswrapper[4900]: E1202 13:48:16.364144 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68\": container with ID starting with 3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68 not found: ID does not exist" containerID="3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.364249 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68"} err="failed to get container status \"3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68\": rpc error: code = NotFound desc = could not find container \"3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68\": container with ID starting with 3d629c99db132ba2f02207af1b60297ace78a653c203f1d8f9c7e5e632279d68 not found: ID does not exist" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.364294 4900 scope.go:117] "RemoveContainer" containerID="b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf" Dec 02 13:48:16 crc kubenswrapper[4900]: E1202 13:48:16.365929 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf\": container with ID starting with b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf not found: ID does not exist" containerID="b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.365965 4900 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf"} err="failed to get container status \"b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf\": rpc error: code = NotFound desc = could not find container \"b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf\": container with ID starting with b85c7a7533968579b6612504d7fc635a593e532a766f17552b8b145db1daffdf not found: ID does not exist" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.365982 4900 scope.go:117] "RemoveContainer" containerID="7adaa8671021c58b04fe41d1fde12fc70775516be7da16763304915be6f94f84" Dec 02 13:48:16 crc kubenswrapper[4900]: E1202 13:48:16.366415 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7adaa8671021c58b04fe41d1fde12fc70775516be7da16763304915be6f94f84\": container with ID starting with 7adaa8671021c58b04fe41d1fde12fc70775516be7da16763304915be6f94f84 not found: ID does not exist" containerID="7adaa8671021c58b04fe41d1fde12fc70775516be7da16763304915be6f94f84" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.366493 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7adaa8671021c58b04fe41d1fde12fc70775516be7da16763304915be6f94f84"} err="failed to get container status \"7adaa8671021c58b04fe41d1fde12fc70775516be7da16763304915be6f94f84\": rpc error: code = NotFound desc = could not find container \"7adaa8671021c58b04fe41d1fde12fc70775516be7da16763304915be6f94f84\": container with ID starting with 7adaa8671021c58b04fe41d1fde12fc70775516be7da16763304915be6f94f84 not found: ID does not exist" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.366562 4900 scope.go:117] "RemoveContainer" containerID="c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.386111 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvppg" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.385990 4900 scope.go:117] "RemoveContainer" containerID="959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.401497 4900 scope.go:117] "RemoveContainer" containerID="c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738" Dec 02 13:48:16 crc kubenswrapper[4900]: E1202 13:48:16.402080 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738\": container with ID starting with c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738 not found: ID does not exist" containerID="c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.402155 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738"} err="failed to get container status \"c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738\": rpc error: code = NotFound desc = could not find container \"c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738\": container with ID starting with c2763552330e4903b264f3da9787b81781108a205fc676b721b4bc26d02fa738 not found: ID does not exist" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.402189 4900 scope.go:117] "RemoveContainer" containerID="959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc" Dec 02 13:48:16 crc kubenswrapper[4900]: E1202 13:48:16.407583 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc\": container with ID starting with 959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc not found: ID does not exist" containerID="959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.407676 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc"} err="failed to get container status \"959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc\": rpc error: code = NotFound desc = could not find container \"959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc\": container with ID starting with 959dd8ead61dc198e00ac442759fc6b0a17da8407222a1c1d9d3de231042b8bc not found: ID does not exist" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.407718 4900 scope.go:117] "RemoveContainer" containerID="d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.429493 4900 scope.go:117] "RemoveContainer" containerID="1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.448633 4900 scope.go:117] "RemoveContainer" containerID="afa5cf980562187371f887f6b2d9cb9659211b0d211ec8a66601c68c6b687221" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.454825 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7prj\" (UniqueName: \"kubernetes.io/projected/60614c3a-d991-4156-9d83-55ab06706291-kube-api-access-g7prj\") pod 
\"60614c3a-d991-4156-9d83-55ab06706291\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.454904 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-utilities\") pod \"60614c3a-d991-4156-9d83-55ab06706291\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.455019 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-catalog-content\") pod \"60614c3a-d991-4156-9d83-55ab06706291\" (UID: \"60614c3a-d991-4156-9d83-55ab06706291\") " Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.455635 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-utilities" (OuterVolumeSpecName: "utilities") pod "60614c3a-d991-4156-9d83-55ab06706291" (UID: "60614c3a-d991-4156-9d83-55ab06706291"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.457689 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60614c3a-d991-4156-9d83-55ab06706291-kube-api-access-g7prj" (OuterVolumeSpecName: "kube-api-access-g7prj") pod "60614c3a-d991-4156-9d83-55ab06706291" (UID: "60614c3a-d991-4156-9d83-55ab06706291"). InnerVolumeSpecName "kube-api-access-g7prj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.467254 4900 scope.go:117] "RemoveContainer" containerID="d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a" Dec 02 13:48:16 crc kubenswrapper[4900]: E1202 13:48:16.467916 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a\": container with ID starting with d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a not found: ID does not exist" containerID="d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.467951 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a"} err="failed to get container status \"d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a\": rpc error: code = NotFound desc = could not find container \"d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a\": container with ID starting with d8a4d3acdb91f6f6c17d8dfb016aac1e7cfd7a220766e71e54f48e256747700a not found: ID does not exist" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.467974 4900 scope.go:117] "RemoveContainer" containerID="1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081" Dec 02 13:48:16 crc kubenswrapper[4900]: E1202 13:48:16.468397 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081\": container with ID starting with 1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081 not found: ID does not exist" 
containerID="1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.468439 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081"} err="failed to get container status \"1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081\": rpc error: code = NotFound desc = could not find container \"1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081\": container with ID starting with 1407ad6965ddab0c68b6b30cf761ef01449920862c0c437d9867d7240f037081 not found: ID does not exist" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.468470 4900 scope.go:117] "RemoveContainer" containerID="afa5cf980562187371f887f6b2d9cb9659211b0d211ec8a66601c68c6b687221" Dec 02 13:48:16 crc kubenswrapper[4900]: E1202 13:48:16.468755 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa5cf980562187371f887f6b2d9cb9659211b0d211ec8a66601c68c6b687221\": container with ID starting with afa5cf980562187371f887f6b2d9cb9659211b0d211ec8a66601c68c6b687221 not found: ID does not exist" containerID="afa5cf980562187371f887f6b2d9cb9659211b0d211ec8a66601c68c6b687221" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.468784 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa5cf980562187371f887f6b2d9cb9659211b0d211ec8a66601c68c6b687221"} err="failed to get container status \"afa5cf980562187371f887f6b2d9cb9659211b0d211ec8a66601c68c6b687221\": rpc error: code = NotFound desc = could not find container \"afa5cf980562187371f887f6b2d9cb9659211b0d211ec8a66601c68c6b687221\": container with ID starting with afa5cf980562187371f887f6b2d9cb9659211b0d211ec8a66601c68c6b687221 not found: ID does not exist" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.554405 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d85cd544-wl8wl"] Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.559052 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" podUID="d5da985e-bb09-4210-b340-4d3dac939b57" containerName="controller-manager" containerID="cri-o://8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5" gracePeriod=30 Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.558832 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7prj\" (UniqueName: \"kubernetes.io/projected/60614c3a-d991-4156-9d83-55ab06706291-kube-api-access-g7prj\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.559188 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.620382 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60614c3a-d991-4156-9d83-55ab06706291" (UID: "60614c3a-d991-4156-9d83-55ab06706291"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.660066 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60614c3a-d991-4156-9d83-55ab06706291-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.918610 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" path="/var/lib/kubelet/pods/13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37/volumes" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.919467 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" path="/var/lib/kubelet/pods/3dc4aaac-9b9e-42e6-b943-25236645d1b2/volumes" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.920433 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5982b283-c40f-4ff6-9ee9-55a16f1db376" path="/var/lib/kubelet/pods/5982b283-c40f-4ff6-9ee9-55a16f1db376/volumes" Dec 02 13:48:16 crc kubenswrapper[4900]: I1202 13:48:16.922306 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de46dffd-919a-4df1-9d52-cbf1d14b8205" path="/var/lib/kubelet/pods/de46dffd-919a-4df1-9d52-cbf1d14b8205/volumes" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.004945 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.064930 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5da985e-bb09-4210-b340-4d3dac939b57-serving-cert\") pod \"d5da985e-bb09-4210-b340-4d3dac939b57\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.066285 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-proxy-ca-bundles\") pod \"d5da985e-bb09-4210-b340-4d3dac939b57\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.066336 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-client-ca\") pod \"d5da985e-bb09-4210-b340-4d3dac939b57\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.066430 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-config\") pod \"d5da985e-bb09-4210-b340-4d3dac939b57\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.066524 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkd7s\" (UniqueName: \"kubernetes.io/projected/d5da985e-bb09-4210-b340-4d3dac939b57-kube-api-access-nkd7s\") pod \"d5da985e-bb09-4210-b340-4d3dac939b57\" (UID: \"d5da985e-bb09-4210-b340-4d3dac939b57\") " Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.067533 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-proxy-ca-bundles" (OuterVolumeSpecName: 
"proxy-ca-bundles") pod "d5da985e-bb09-4210-b340-4d3dac939b57" (UID: "d5da985e-bb09-4210-b340-4d3dac939b57"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.067676 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-config" (OuterVolumeSpecName: "config") pod "d5da985e-bb09-4210-b340-4d3dac939b57" (UID: "d5da985e-bb09-4210-b340-4d3dac939b57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.067889 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-client-ca" (OuterVolumeSpecName: "client-ca") pod "d5da985e-bb09-4210-b340-4d3dac939b57" (UID: "d5da985e-bb09-4210-b340-4d3dac939b57"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.084801 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5da985e-bb09-4210-b340-4d3dac939b57-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d5da985e-bb09-4210-b340-4d3dac939b57" (UID: "d5da985e-bb09-4210-b340-4d3dac939b57"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.084869 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5da985e-bb09-4210-b340-4d3dac939b57-kube-api-access-nkd7s" (OuterVolumeSpecName: "kube-api-access-nkd7s") pod "d5da985e-bb09-4210-b340-4d3dac939b57" (UID: "d5da985e-bb09-4210-b340-4d3dac939b57"). InnerVolumeSpecName "kube-api-access-nkd7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.168406 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5da985e-bb09-4210-b340-4d3dac939b57-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.168443 4900 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.168454 4900 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.168463 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5da985e-bb09-4210-b340-4d3dac939b57-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.168475 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkd7s\" (UniqueName: \"kubernetes.io/projected/d5da985e-bb09-4210-b340-4d3dac939b57-kube-api-access-nkd7s\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181452 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jt5q6"] Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181719 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerName="extract-utilities" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181737 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerName="extract-utilities" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181750 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60614c3a-d991-4156-9d83-55ab06706291" containerName="extract-utilities" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181758 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="60614c3a-d991-4156-9d83-55ab06706291" containerName="extract-utilities" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181773 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181784 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181794 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerName="extract-content" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181801 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerName="extract-content" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181813 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60614c3a-d991-4156-9d83-55ab06706291" containerName="extract-content" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181820 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="60614c3a-d991-4156-9d83-55ab06706291" containerName="extract-content" Dec 02 
13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181830 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60614c3a-d991-4156-9d83-55ab06706291" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181840 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="60614c3a-d991-4156-9d83-55ab06706291" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181850 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5982b283-c40f-4ff6-9ee9-55a16f1db376" containerName="extract-content" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181858 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="5982b283-c40f-4ff6-9ee9-55a16f1db376" containerName="extract-content" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181869 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de46dffd-919a-4df1-9d52-cbf1d14b8205" containerName="marketplace-operator" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181877 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="de46dffd-919a-4df1-9d52-cbf1d14b8205" containerName="marketplace-operator" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181889 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de46dffd-919a-4df1-9d52-cbf1d14b8205" containerName="marketplace-operator" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181897 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="de46dffd-919a-4df1-9d52-cbf1d14b8205" containerName="marketplace-operator" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181910 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" containerName="extract-utilities" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181919 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" containerName="extract-utilities" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181932 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" containerName="extract-content" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181941 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" containerName="extract-content" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181956 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5982b283-c40f-4ff6-9ee9-55a16f1db376" containerName="extract-utilities" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181965 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="5982b283-c40f-4ff6-9ee9-55a16f1db376" containerName="extract-utilities" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181975 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5982b283-c40f-4ff6-9ee9-55a16f1db376" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181982 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="5982b283-c40f-4ff6-9ee9-55a16f1db376" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.181991 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.181996 4900 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.182005 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5da985e-bb09-4210-b340-4d3dac939b57" containerName="controller-manager" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.182012 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5da985e-bb09-4210-b340-4d3dac939b57" containerName="controller-manager" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.182090 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="de46dffd-919a-4df1-9d52-cbf1d14b8205" containerName="marketplace-operator" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.182101 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="de46dffd-919a-4df1-9d52-cbf1d14b8205" containerName="marketplace-operator" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.182109 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="13089ea5-5a5f-4e1b-9e53-9ceb0cce5e37" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.182116 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="5982b283-c40f-4ff6-9ee9-55a16f1db376" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.182125 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dc4aaac-9b9e-42e6-b943-25236645d1b2" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.182134 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="60614c3a-d991-4156-9d83-55ab06706291" containerName="registry-server" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.182142 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5da985e-bb09-4210-b340-4d3dac939b57" containerName="controller-manager" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.183053 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.187160 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.205984 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jt5q6"] Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.249894 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvppg" event={"ID":"60614c3a-d991-4156-9d83-55ab06706291","Type":"ContainerDied","Data":"f6bcff97c2aa8d3842e398cdcba50aa591eb8924cb0437692e7b6711ecadc733"} Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.249960 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvppg" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.249984 4900 scope.go:117] "RemoveContainer" containerID="afd2d17e10a45c0e607700ffa0542ee6834160795527ca24e1f29d6856731ce3" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.252241 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" event={"ID":"fe7ce6ee-fda9-4b74-a46d-4918743dbeb8","Type":"ContainerStarted","Data":"90d249aa3b9ce44305fe5fbad5818507b318e66d8b39f1cf1bbc8aa0921979bd"} Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.252303 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" event={"ID":"fe7ce6ee-fda9-4b74-a46d-4918743dbeb8","Type":"ContainerStarted","Data":"330a9ac86aad6bfbab61cf5b4d4258ef33519f162f00714bafa511203f729ba9"} Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.253063 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.257087 4900 generic.go:334] "Generic (PLEG): container finished" podID="d5da985e-bb09-4210-b340-4d3dac939b57" containerID="8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5" exitCode=0 Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.257118 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" event={"ID":"d5da985e-bb09-4210-b340-4d3dac939b57","Type":"ContainerDied","Data":"8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5"} Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.257135 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" event={"ID":"d5da985e-bb09-4210-b340-4d3dac939b57","Type":"ContainerDied","Data":"983c6aa815023bf8bd7ffbd93456b1f50f0d5af5b46d09b4fab97ebc27882e9d"} Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.257201 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d85cd544-wl8wl" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.260337 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.271151 4900 scope.go:117] "RemoveContainer" containerID="572051a3030a88f10b1d1c2fbcf019be6ef5cfd055c28db1a42560e295e6964e" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.271660 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlmkq\" (UniqueName: \"kubernetes.io/projected/99ea26f5-048d-4410-ba58-83c56333dcc0-kube-api-access-xlmkq\") pod \"certified-operators-jt5q6\" (UID: \"99ea26f5-048d-4410-ba58-83c56333dcc0\") " pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.271748 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ea26f5-048d-4410-ba58-83c56333dcc0-utilities\") pod \"certified-operators-jt5q6\" (UID: \"99ea26f5-048d-4410-ba58-83c56333dcc0\") " pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.271813 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ea26f5-048d-4410-ba58-83c56333dcc0-catalog-content\") pod \"certified-operators-jt5q6\" (UID: \"99ea26f5-048d-4410-ba58-83c56333dcc0\") " pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.291135 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wfqvd" podStartSLOduration=2.291111311 podStartE2EDuration="2.291111311s" podCreationTimestamp="2025-12-02 13:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:48:17.268504619 +0000 UTC m=+342.684318490" watchObservedRunningTime="2025-12-02 13:48:17.291111311 +0000 UTC m=+342.706925172" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.310898 4900 scope.go:117] "RemoveContainer" containerID="172bd9e8db11f6f4c4094cecbc79bacf707a36d0a71325c7fc7f16d9ecb7ec07" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.319745 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvppg"] Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.330717 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvppg"] Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.337440 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d85cd544-wl8wl"] Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.340272 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d85cd544-wl8wl"] Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.342709 4900 scope.go:117] "RemoveContainer" containerID="8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.355916 4900 scope.go:117] "RemoveContainer" 
containerID="8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5" Dec 02 13:48:17 crc kubenswrapper[4900]: E1202 13:48:17.356365 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5\": container with ID starting with 8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5 not found: ID does not exist" containerID="8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.356415 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5"} err="failed to get container status \"8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5\": rpc error: code = NotFound desc = could not find container \"8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5\": container with ID starting with 8e708aa3448a8df67c0f12b4d42cfb73fccf7355bfd02b479f6d5ef100b6b4f5 not found: ID does not exist" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.373412 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlmkq\" (UniqueName: \"kubernetes.io/projected/99ea26f5-048d-4410-ba58-83c56333dcc0-kube-api-access-xlmkq\") pod \"certified-operators-jt5q6\" (UID: \"99ea26f5-048d-4410-ba58-83c56333dcc0\") " pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.373480 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ea26f5-048d-4410-ba58-83c56333dcc0-utilities\") pod \"certified-operators-jt5q6\" (UID: \"99ea26f5-048d-4410-ba58-83c56333dcc0\") " pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.373520 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ea26f5-048d-4410-ba58-83c56333dcc0-catalog-content\") pod \"certified-operators-jt5q6\" (UID: \"99ea26f5-048d-4410-ba58-83c56333dcc0\") " pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.374956 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ea26f5-048d-4410-ba58-83c56333dcc0-catalog-content\") pod \"certified-operators-jt5q6\" (UID: \"99ea26f5-048d-4410-ba58-83c56333dcc0\") " pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.374976 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ea26f5-048d-4410-ba58-83c56333dcc0-utilities\") pod \"certified-operators-jt5q6\" (UID: \"99ea26f5-048d-4410-ba58-83c56333dcc0\") " pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.389976 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlmkq\" (UniqueName: \"kubernetes.io/projected/99ea26f5-048d-4410-ba58-83c56333dcc0-kube-api-access-xlmkq\") pod \"certified-operators-jt5q6\" (UID: \"99ea26f5-048d-4410-ba58-83c56333dcc0\") " pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 
13:48:17.510178 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:17 crc kubenswrapper[4900]: I1202 13:48:17.933475 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jt5q6"] Dec 02 13:48:17 crc kubenswrapper[4900]: W1202 13:48:17.944706 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99ea26f5_048d_4410_ba58_83c56333dcc0.slice/crio-a858c28c4e3a040e07b6bfaea9b9864380354cc4631ca62d588530f92a10761d WatchSource:0}: Error finding container a858c28c4e3a040e07b6bfaea9b9864380354cc4631ca62d588530f92a10761d: Status 404 returned error can't find the container with id a858c28c4e3a040e07b6bfaea9b9864380354cc4631ca62d588530f92a10761d Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.178260 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hppxd"] Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.182743 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.184430 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.189056 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hppxd"] Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.228863 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-647df875d4-d8rkn"] Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.230921 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.233447 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.235018 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.235112 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.235803 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.236336 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-647df875d4-d8rkn"] Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.236368 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.236549 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.243569 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.265203 4900 generic.go:334] "Generic (PLEG): container finished" podID="99ea26f5-048d-4410-ba58-83c56333dcc0" containerID="9b4e3ca6287ef7e060798d1b7c77d2d4a0a92f3d6f399eae4116351879f5f447" exitCode=0 Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.265377 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt5q6" event={"ID":"99ea26f5-048d-4410-ba58-83c56333dcc0","Type":"ContainerDied","Data":"9b4e3ca6287ef7e060798d1b7c77d2d4a0a92f3d6f399eae4116351879f5f447"} Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.265419 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt5q6" event={"ID":"99ea26f5-048d-4410-ba58-83c56333dcc0","Type":"ContainerStarted","Data":"a858c28c4e3a040e07b6bfaea9b9864380354cc4631ca62d588530f92a10761d"} Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.287201 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684d4081-a3d6-4123-8308-382f9043474d-config\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.287272 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/684d4081-a3d6-4123-8308-382f9043474d-proxy-ca-bundles\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.287318 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/684d4081-a3d6-4123-8308-382f9043474d-serving-cert\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.287458 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/684d4081-a3d6-4123-8308-382f9043474d-client-ca\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.287719 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9872ff30-bf71-4634-becb-6a860eff216f-utilities\") pod \"redhat-marketplace-hppxd\" (UID: \"9872ff30-bf71-4634-becb-6a860eff216f\") " pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.287903 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6g4p\" (UniqueName: \"kubernetes.io/projected/9872ff30-bf71-4634-becb-6a860eff216f-kube-api-access-v6g4p\") pod \"redhat-marketplace-hppxd\" (UID: \"9872ff30-bf71-4634-becb-6a860eff216f\") " pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.288002 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2j2g\" (UniqueName: \"kubernetes.io/projected/684d4081-a3d6-4123-8308-382f9043474d-kube-api-access-p2j2g\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.289096 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9872ff30-bf71-4634-becb-6a860eff216f-catalog-content\") pod \"redhat-marketplace-hppxd\" (UID: \"9872ff30-bf71-4634-becb-6a860eff216f\") " pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.390740 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/684d4081-a3d6-4123-8308-382f9043474d-serving-cert\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.390844 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/684d4081-a3d6-4123-8308-382f9043474d-client-ca\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.390907 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9872ff30-bf71-4634-becb-6a860eff216f-utilities\") pod \"redhat-marketplace-hppxd\" (UID: \"9872ff30-bf71-4634-becb-6a860eff216f\") " 
pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.390952 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6g4p\" (UniqueName: \"kubernetes.io/projected/9872ff30-bf71-4634-becb-6a860eff216f-kube-api-access-v6g4p\") pod \"redhat-marketplace-hppxd\" (UID: \"9872ff30-bf71-4634-becb-6a860eff216f\") " pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.390985 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2j2g\" (UniqueName: \"kubernetes.io/projected/684d4081-a3d6-4123-8308-382f9043474d-kube-api-access-p2j2g\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.391015 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9872ff30-bf71-4634-becb-6a860eff216f-catalog-content\") pod \"redhat-marketplace-hppxd\" (UID: \"9872ff30-bf71-4634-becb-6a860eff216f\") " pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.391066 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684d4081-a3d6-4123-8308-382f9043474d-config\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.391093 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/684d4081-a3d6-4123-8308-382f9043474d-proxy-ca-bundles\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.392347 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9872ff30-bf71-4634-becb-6a860eff216f-utilities\") pod \"redhat-marketplace-hppxd\" (UID: \"9872ff30-bf71-4634-becb-6a860eff216f\") " pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.393499 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9872ff30-bf71-4634-becb-6a860eff216f-catalog-content\") pod \"redhat-marketplace-hppxd\" (UID: \"9872ff30-bf71-4634-becb-6a860eff216f\") " pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.394538 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684d4081-a3d6-4123-8308-382f9043474d-config\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.395574 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/684d4081-a3d6-4123-8308-382f9043474d-client-ca\") pod 
\"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.395803 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/684d4081-a3d6-4123-8308-382f9043474d-proxy-ca-bundles\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.410906 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2j2g\" (UniqueName: \"kubernetes.io/projected/684d4081-a3d6-4123-8308-382f9043474d-kube-api-access-p2j2g\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.416294 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/684d4081-a3d6-4123-8308-382f9043474d-serving-cert\") pod \"controller-manager-647df875d4-d8rkn\" (UID: \"684d4081-a3d6-4123-8308-382f9043474d\") " pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.421103 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6g4p\" (UniqueName: \"kubernetes.io/projected/9872ff30-bf71-4634-becb-6a860eff216f-kube-api-access-v6g4p\") pod \"redhat-marketplace-hppxd\" (UID: \"9872ff30-bf71-4634-becb-6a860eff216f\") " pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.498751 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.549068 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.763334 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hppxd"] Dec 02 13:48:18 crc kubenswrapper[4900]: W1202 13:48:18.771718 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9872ff30_bf71_4634_becb_6a860eff216f.slice/crio-fb91f9fd1db4139c81481e03250aac95f90b6b450125ba1870008abb6f2e8527 WatchSource:0}: Error finding container fb91f9fd1db4139c81481e03250aac95f90b6b450125ba1870008abb6f2e8527: Status 404 returned error can't find the container with id fb91f9fd1db4139c81481e03250aac95f90b6b450125ba1870008abb6f2e8527 Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.920347 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60614c3a-d991-4156-9d83-55ab06706291" path="/var/lib/kubelet/pods/60614c3a-d991-4156-9d83-55ab06706291/volumes" Dec 02 13:48:18 crc kubenswrapper[4900]: I1202 13:48:18.921922 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5da985e-bb09-4210-b340-4d3dac939b57" path="/var/lib/kubelet/pods/d5da985e-bb09-4210-b340-4d3dac939b57/volumes" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.001670 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-647df875d4-d8rkn"] Dec 02 13:48:19 crc kubenswrapper[4900]: W1202 13:48:19.030734 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684d4081_a3d6_4123_8308_382f9043474d.slice/crio-cd2b120d108e30eb21623c56283c96dd29d59575cc2133c17009b72ab7748109 WatchSource:0}: Error finding container cd2b120d108e30eb21623c56283c96dd29d59575cc2133c17009b72ab7748109: Status 404 returned error can't find the container with id cd2b120d108e30eb21623c56283c96dd29d59575cc2133c17009b72ab7748109 Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.275042 4900 generic.go:334] "Generic (PLEG): container finished" podID="9872ff30-bf71-4634-becb-6a860eff216f" containerID="0170112b7d58a9932abf34539deb89b49f020da89b75b7d0330cd147c2317181" exitCode=0 Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.275103 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hppxd" event={"ID":"9872ff30-bf71-4634-becb-6a860eff216f","Type":"ContainerDied","Data":"0170112b7d58a9932abf34539deb89b49f020da89b75b7d0330cd147c2317181"} Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.275130 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hppxd" event={"ID":"9872ff30-bf71-4634-becb-6a860eff216f","Type":"ContainerStarted","Data":"fb91f9fd1db4139c81481e03250aac95f90b6b450125ba1870008abb6f2e8527"} Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.278928 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt5q6" event={"ID":"99ea26f5-048d-4410-ba58-83c56333dcc0","Type":"ContainerStarted","Data":"96224e5b904d23e177988601eb491b1bdac9de50699d52f99130a86d80c93373"} Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.281164 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" 
event={"ID":"684d4081-a3d6-4123-8308-382f9043474d","Type":"ContainerStarted","Data":"9dc30c437a0b9df3899bc8050bf323df2ff44735cad4023ed9242c6a08802b6a"} Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.281196 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" event={"ID":"684d4081-a3d6-4123-8308-382f9043474d","Type":"ContainerStarted","Data":"cd2b120d108e30eb21623c56283c96dd29d59575cc2133c17009b72ab7748109"} Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.281625 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.282755 4900 patch_prober.go:28] interesting pod/controller-manager-647df875d4-d8rkn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.282810 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" podUID="684d4081-a3d6-4123-8308-382f9043474d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.351684 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" podStartSLOduration=3.351666797 podStartE2EDuration="3.351666797s" podCreationTimestamp="2025-12-02 13:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:48:19.336455498 +0000 UTC m=+344.752269349" watchObservedRunningTime="2025-12-02 13:48:19.351666797 +0000 UTC m=+344.767480648" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.580281 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x6hx4"] Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.581570 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.584142 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.587990 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6hx4"] Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.709452 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-catalog-content\") pod \"redhat-operators-x6hx4\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.709748 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csbd8\" (UniqueName: \"kubernetes.io/projected/5f06d1fb-7275-44f3-867d-4178c35c0952-kube-api-access-csbd8\") pod \"redhat-operators-x6hx4\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.709867 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-utilities\") pod \"redhat-operators-x6hx4\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.811182 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-catalog-content\") pod \"redhat-operators-x6hx4\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.811258 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csbd8\" (UniqueName: \"kubernetes.io/projected/5f06d1fb-7275-44f3-867d-4178c35c0952-kube-api-access-csbd8\") pod \"redhat-operators-x6hx4\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.811318 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-utilities\") pod \"redhat-operators-x6hx4\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.812287 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-utilities\") pod \"redhat-operators-x6hx4\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.812549 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-catalog-content\") pod \"redhat-operators-x6hx4\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " 
pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.841335 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csbd8\" (UniqueName: \"kubernetes.io/projected/5f06d1fb-7275-44f3-867d-4178c35c0952-kube-api-access-csbd8\") pod \"redhat-operators-x6hx4\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:19 crc kubenswrapper[4900]: I1202 13:48:19.896111 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.294658 4900 generic.go:334] "Generic (PLEG): container finished" podID="9872ff30-bf71-4634-becb-6a860eff216f" containerID="41db19b35a7eea8f4a0038cf280616b36541f1f1e56e7fcea7381c47352defe9" exitCode=0 Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.294751 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hppxd" event={"ID":"9872ff30-bf71-4634-becb-6a860eff216f","Type":"ContainerDied","Data":"41db19b35a7eea8f4a0038cf280616b36541f1f1e56e7fcea7381c47352defe9"} Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.298437 4900 generic.go:334] "Generic (PLEG): container finished" podID="99ea26f5-048d-4410-ba58-83c56333dcc0" containerID="96224e5b904d23e177988601eb491b1bdac9de50699d52f99130a86d80c93373" exitCode=0 Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.299138 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt5q6" event={"ID":"99ea26f5-048d-4410-ba58-83c56333dcc0","Type":"ContainerDied","Data":"96224e5b904d23e177988601eb491b1bdac9de50699d52f99130a86d80c93373"} Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.305554 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-647df875d4-d8rkn" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.357959 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6hx4"] Dec 02 13:48:20 crc kubenswrapper[4900]: W1202 13:48:20.358903 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f06d1fb_7275_44f3_867d_4178c35c0952.slice/crio-0cb2ff769360cbce4bf4e1ff6fc6913ac3c42c4610710255909aeb71db979055 WatchSource:0}: Error finding container 0cb2ff769360cbce4bf4e1ff6fc6913ac3c42c4610710255909aeb71db979055: Status 404 returned error can't find the container with id 0cb2ff769360cbce4bf4e1ff6fc6913ac3c42c4610710255909aeb71db979055 Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.587179 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mp6vg"] Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.589897 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.594481 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.597506 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mp6vg"] Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.727673 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-catalog-content\") pod \"community-operators-mp6vg\" (UID: \"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.727781 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk7jl\" (UniqueName: \"kubernetes.io/projected/cb010fa9-e439-452d-9f8f-b9882850ac8a-kube-api-access-bk7jl\") pod \"community-operators-mp6vg\" (UID: \"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.727812 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-utilities\") pod \"community-operators-mp6vg\" (UID: \"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.829670 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-catalog-content\") pod \"community-operators-mp6vg\" (UID: \"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.829775 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk7jl\" (UniqueName: \"kubernetes.io/projected/cb010fa9-e439-452d-9f8f-b9882850ac8a-kube-api-access-bk7jl\") pod \"community-operators-mp6vg\" (UID: \"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.829815 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-utilities\") pod \"community-operators-mp6vg\" (UID: \"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.830618 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-utilities\") pod \"community-operators-mp6vg\" (UID: \"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.830636 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-catalog-content\") pod \"community-operators-mp6vg\" (UID: 
\"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.864311 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk7jl\" (UniqueName: \"kubernetes.io/projected/cb010fa9-e439-452d-9f8f-b9882850ac8a-kube-api-access-bk7jl\") pod \"community-operators-mp6vg\" (UID: \"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:20 crc kubenswrapper[4900]: I1202 13:48:20.917215 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:21 crc kubenswrapper[4900]: I1202 13:48:21.305320 4900 generic.go:334] "Generic (PLEG): container finished" podID="5f06d1fb-7275-44f3-867d-4178c35c0952" containerID="ee724de7a0bec1db42c0de642d12e2deff05d4ddb431d4e4ec3e76b8747369bd" exitCode=0 Dec 02 13:48:21 crc kubenswrapper[4900]: I1202 13:48:21.305424 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6hx4" event={"ID":"5f06d1fb-7275-44f3-867d-4178c35c0952","Type":"ContainerDied","Data":"ee724de7a0bec1db42c0de642d12e2deff05d4ddb431d4e4ec3e76b8747369bd"} Dec 02 13:48:21 crc kubenswrapper[4900]: I1202 13:48:21.306052 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6hx4" event={"ID":"5f06d1fb-7275-44f3-867d-4178c35c0952","Type":"ContainerStarted","Data":"0cb2ff769360cbce4bf4e1ff6fc6913ac3c42c4610710255909aeb71db979055"} Dec 02 13:48:21 crc kubenswrapper[4900]: I1202 13:48:21.311555 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt5q6" event={"ID":"99ea26f5-048d-4410-ba58-83c56333dcc0","Type":"ContainerStarted","Data":"97d9637528daf222570bbc531332a78b7ed74a80d912b412d13137b3fe01c0b1"} Dec 02 13:48:21 crc kubenswrapper[4900]: I1202 13:48:21.358132 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mp6vg"] Dec 02 13:48:21 crc kubenswrapper[4900]: I1202 13:48:21.362870 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jt5q6" podStartSLOduration=1.7768056620000001 podStartE2EDuration="4.362852464s" podCreationTimestamp="2025-12-02 13:48:17 +0000 UTC" firstStartedPulling="2025-12-02 13:48:18.26818498 +0000 UTC m=+343.683998851" lastFinishedPulling="2025-12-02 13:48:20.854231762 +0000 UTC m=+346.270045653" observedRunningTime="2025-12-02 13:48:21.361958309 +0000 UTC m=+346.777772170" watchObservedRunningTime="2025-12-02 13:48:21.362852464 +0000 UTC m=+346.778666315" Dec 02 13:48:21 crc kubenswrapper[4900]: W1202 13:48:21.367583 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb010fa9_e439_452d_9f8f_b9882850ac8a.slice/crio-bbe7b9dfa29e19952947d16a4463fb9a88bfba114badaa762c56f5e25564212b WatchSource:0}: Error finding container bbe7b9dfa29e19952947d16a4463fb9a88bfba114badaa762c56f5e25564212b: Status 404 returned error can't find the container with id bbe7b9dfa29e19952947d16a4463fb9a88bfba114badaa762c56f5e25564212b Dec 02 13:48:22 crc kubenswrapper[4900]: I1202 13:48:22.320182 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hppxd" 
event={"ID":"9872ff30-bf71-4634-becb-6a860eff216f","Type":"ContainerStarted","Data":"d51b2562d12f26735b46d295a497c869ffda20aa80cb1a53f703a2b58e62bc3f"} Dec 02 13:48:22 crc kubenswrapper[4900]: I1202 13:48:22.322234 4900 generic.go:334] "Generic (PLEG): container finished" podID="cb010fa9-e439-452d-9f8f-b9882850ac8a" containerID="c83323d943218ed789b8570273b5bae61e14c2e0ad8e56d5f908c29477d27169" exitCode=0 Dec 02 13:48:22 crc kubenswrapper[4900]: I1202 13:48:22.322316 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mp6vg" event={"ID":"cb010fa9-e439-452d-9f8f-b9882850ac8a","Type":"ContainerDied","Data":"c83323d943218ed789b8570273b5bae61e14c2e0ad8e56d5f908c29477d27169"} Dec 02 13:48:22 crc kubenswrapper[4900]: I1202 13:48:22.322351 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mp6vg" event={"ID":"cb010fa9-e439-452d-9f8f-b9882850ac8a","Type":"ContainerStarted","Data":"bbe7b9dfa29e19952947d16a4463fb9a88bfba114badaa762c56f5e25564212b"} Dec 02 13:48:22 crc kubenswrapper[4900]: I1202 13:48:22.324396 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6hx4" event={"ID":"5f06d1fb-7275-44f3-867d-4178c35c0952","Type":"ContainerStarted","Data":"5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3"} Dec 02 13:48:22 crc kubenswrapper[4900]: I1202 13:48:22.345314 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hppxd" podStartSLOduration=2.432944122 podStartE2EDuration="4.345297321s" podCreationTimestamp="2025-12-02 13:48:18 +0000 UTC" firstStartedPulling="2025-12-02 13:48:19.276586101 +0000 UTC m=+344.692399952" lastFinishedPulling="2025-12-02 13:48:21.1889393 +0000 UTC m=+346.604753151" observedRunningTime="2025-12-02 13:48:22.341038684 +0000 UTC m=+347.756852545" watchObservedRunningTime="2025-12-02 13:48:22.345297321 +0000 UTC m=+347.761111172" Dec 02 13:48:23 crc kubenswrapper[4900]: I1202 13:48:23.334389 4900 generic.go:334] "Generic (PLEG): container finished" podID="5f06d1fb-7275-44f3-867d-4178c35c0952" containerID="5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3" exitCode=0 Dec 02 13:48:23 crc kubenswrapper[4900]: I1202 13:48:23.334515 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6hx4" event={"ID":"5f06d1fb-7275-44f3-867d-4178c35c0952","Type":"ContainerDied","Data":"5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3"} Dec 02 13:48:23 crc kubenswrapper[4900]: I1202 13:48:23.338544 4900 generic.go:334] "Generic (PLEG): container finished" podID="cb010fa9-e439-452d-9f8f-b9882850ac8a" containerID="7f5ee1e76056692f9233d20c19915c87860af6d258b2187c2856f7211828eb29" exitCode=0 Dec 02 13:48:23 crc kubenswrapper[4900]: I1202 13:48:23.338686 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mp6vg" event={"ID":"cb010fa9-e439-452d-9f8f-b9882850ac8a","Type":"ContainerDied","Data":"7f5ee1e76056692f9233d20c19915c87860af6d258b2187c2856f7211828eb29"} Dec 02 13:48:24 crc kubenswrapper[4900]: I1202 13:48:24.348023 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mp6vg" event={"ID":"cb010fa9-e439-452d-9f8f-b9882850ac8a","Type":"ContainerStarted","Data":"89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c"} Dec 02 13:48:24 crc kubenswrapper[4900]: I1202 13:48:24.350443 4900 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6hx4" event={"ID":"5f06d1fb-7275-44f3-867d-4178c35c0952","Type":"ContainerStarted","Data":"e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8"} Dec 02 13:48:24 crc kubenswrapper[4900]: I1202 13:48:24.369775 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mp6vg" podStartSLOduration=2.85311432 podStartE2EDuration="4.369757063s" podCreationTimestamp="2025-12-02 13:48:20 +0000 UTC" firstStartedPulling="2025-12-02 13:48:22.323383538 +0000 UTC m=+347.739197399" lastFinishedPulling="2025-12-02 13:48:23.840026291 +0000 UTC m=+349.255840142" observedRunningTime="2025-12-02 13:48:24.365502056 +0000 UTC m=+349.781315907" watchObservedRunningTime="2025-12-02 13:48:24.369757063 +0000 UTC m=+349.785570924" Dec 02 13:48:24 crc kubenswrapper[4900]: I1202 13:48:24.383897 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x6hx4" podStartSLOduration=2.883912748 podStartE2EDuration="5.383887432s" podCreationTimestamp="2025-12-02 13:48:19 +0000 UTC" firstStartedPulling="2025-12-02 13:48:21.307187153 +0000 UTC m=+346.723001004" lastFinishedPulling="2025-12-02 13:48:23.807161797 +0000 UTC m=+349.222975688" observedRunningTime="2025-12-02 13:48:24.382677009 +0000 UTC m=+349.798490850" watchObservedRunningTime="2025-12-02 13:48:24.383887432 +0000 UTC m=+349.799701293" Dec 02 13:48:27 crc kubenswrapper[4900]: I1202 13:48:27.510963 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:27 crc kubenswrapper[4900]: I1202 13:48:27.511585 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:27 crc kubenswrapper[4900]: I1202 13:48:27.587900 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:28 crc kubenswrapper[4900]: I1202 13:48:28.428888 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jt5q6" Dec 02 13:48:28 crc kubenswrapper[4900]: I1202 13:48:28.498948 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:28 crc kubenswrapper[4900]: I1202 13:48:28.498986 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:28 crc kubenswrapper[4900]: I1202 13:48:28.562961 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:29 crc kubenswrapper[4900]: I1202 13:48:29.442946 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hppxd" Dec 02 13:48:29 crc kubenswrapper[4900]: I1202 13:48:29.897458 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:29 crc kubenswrapper[4900]: I1202 13:48:29.897527 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:29 crc kubenswrapper[4900]: I1202 13:48:29.968795 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:30 crc kubenswrapper[4900]: I1202 13:48:30.435764 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 13:48:30 crc kubenswrapper[4900]: I1202 13:48:30.919435 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:30 crc kubenswrapper[4900]: I1202 13:48:30.919490 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:30 crc kubenswrapper[4900]: I1202 13:48:30.976674 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:31 crc kubenswrapper[4900]: I1202 13:48:31.430194 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mp6vg" Dec 02 13:48:36 crc kubenswrapper[4900]: I1202 13:48:36.554453 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds"] Dec 02 13:48:36 crc kubenswrapper[4900]: I1202 13:48:36.554931 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" podUID="800f2bea-945f-4d35-94a5-3889a856b2b1" containerName="route-controller-manager" containerID="cri-o://b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1" gracePeriod=30 Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.165785 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cx89d"] Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.166798 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.182371 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cx89d"] Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.300533 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-registry-tls\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.300624 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.300683 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-registry-certificates\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.300722 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-trusted-ca\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.300768 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.300839 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.300874 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qvcn\" (UniqueName: \"kubernetes.io/projected/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-kube-api-access-4qvcn\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.300900 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-bound-sa-token\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.322167 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.332887 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.381254 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9"] Dec 02 13:48:38 crc kubenswrapper[4900]: E1202 13:48:38.381582 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800f2bea-945f-4d35-94a5-3889a856b2b1" containerName="route-controller-manager" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.381605 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="800f2bea-945f-4d35-94a5-3889a856b2b1" containerName="route-controller-manager" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.381779 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="800f2bea-945f-4d35-94a5-3889a856b2b1" containerName="route-controller-manager" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.382254 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.387713 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9"] Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.401721 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ljzp\" (UniqueName: \"kubernetes.io/projected/800f2bea-945f-4d35-94a5-3889a856b2b1-kube-api-access-5ljzp\") pod \"800f2bea-945f-4d35-94a5-3889a856b2b1\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.401781 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-config\") pod \"800f2bea-945f-4d35-94a5-3889a856b2b1\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.401803 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-client-ca\") pod \"800f2bea-945f-4d35-94a5-3889a856b2b1\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.401840 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/800f2bea-945f-4d35-94a5-3889a856b2b1-serving-cert\") pod \"800f2bea-945f-4d35-94a5-3889a856b2b1\" (UID: \"800f2bea-945f-4d35-94a5-3889a856b2b1\") " Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.402172 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.402237 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-registry-certificates\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.402270 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-trusted-ca\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.402321 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.402349 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qvcn\" 
(UniqueName: \"kubernetes.io/projected/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-kube-api-access-4qvcn\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.402367 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-bound-sa-token\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.402389 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-registry-tls\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.403090 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-config" (OuterVolumeSpecName: "config") pod "800f2bea-945f-4d35-94a5-3889a856b2b1" (UID: "800f2bea-945f-4d35-94a5-3889a856b2b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.404968 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-registry-certificates\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.405339 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "800f2bea-945f-4d35-94a5-3889a856b2b1" (UID: "800f2bea-945f-4d35-94a5-3889a856b2b1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.405760 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.408070 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-trusted-ca\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.410178 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800f2bea-945f-4d35-94a5-3889a856b2b1-kube-api-access-5ljzp" (OuterVolumeSpecName: "kube-api-access-5ljzp") pod "800f2bea-945f-4d35-94a5-3889a856b2b1" (UID: "800f2bea-945f-4d35-94a5-3889a856b2b1"). InnerVolumeSpecName "kube-api-access-5ljzp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.413703 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.416839 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/800f2bea-945f-4d35-94a5-3889a856b2b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "800f2bea-945f-4d35-94a5-3889a856b2b1" (UID: "800f2bea-945f-4d35-94a5-3889a856b2b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.418697 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-registry-tls\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.430357 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-bound-sa-token\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.434939 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qvcn\" (UniqueName: \"kubernetes.io/projected/82a8a72c-0859-415e-a8ee-d4cf2c7a772e-kube-api-access-4qvcn\") pod \"image-registry-66df7c8f76-cx89d\" (UID: \"82a8a72c-0859-415e-a8ee-d4cf2c7a772e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.435948 4900 generic.go:334] "Generic (PLEG): container finished" podID="800f2bea-945f-4d35-94a5-3889a856b2b1" containerID="b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1" exitCode=0 Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.436012 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" event={"ID":"800f2bea-945f-4d35-94a5-3889a856b2b1","Type":"ContainerDied","Data":"b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1"} Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.436036 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.436106 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds" event={"ID":"800f2bea-945f-4d35-94a5-3889a856b2b1","Type":"ContainerDied","Data":"6fd8af256f0fc26ed3d36e3f172aba3fe195806cfdc4176f12bcc811ab412951"} Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.436160 4900 scope.go:117] "RemoveContainer" containerID="b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.458208 4900 scope.go:117] "RemoveContainer" containerID="b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1" Dec 02 13:48:38 crc kubenswrapper[4900]: E1202 13:48:38.459261 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1\": container with ID starting with b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1 not found: ID does not exist" containerID="b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.459302 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1"} err="failed to get container status \"b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1\": rpc error: code = NotFound desc = could not find container \"b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1\": container with ID starting with b1ab887e02bbe03b05238b30e4f49ff72e73590b3f943f5d2476fecfd220c5e1 not found: ID does not exist" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.470515 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds"] Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.473424 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-548cc76b94-476ds"] Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.487030 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.503923 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45h56\" (UniqueName: \"kubernetes.io/projected/45b8452b-b22c-435e-9325-ce9102b1826c-kube-api-access-45h56\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.503967 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b8452b-b22c-435e-9325-ce9102b1826c-client-ca\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.504011 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b8452b-b22c-435e-9325-ce9102b1826c-serving-cert\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.504041 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b8452b-b22c-435e-9325-ce9102b1826c-config\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.504091 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ljzp\" (UniqueName: \"kubernetes.io/projected/800f2bea-945f-4d35-94a5-3889a856b2b1-kube-api-access-5ljzp\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.504102 4900 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.504112 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800f2bea-945f-4d35-94a5-3889a856b2b1-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.504120 4900 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/800f2bea-945f-4d35-94a5-3889a856b2b1-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.606355 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b8452b-b22c-435e-9325-ce9102b1826c-config\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.606444 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45h56\" (UniqueName: 
\"kubernetes.io/projected/45b8452b-b22c-435e-9325-ce9102b1826c-kube-api-access-45h56\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.606472 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b8452b-b22c-435e-9325-ce9102b1826c-client-ca\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.606508 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b8452b-b22c-435e-9325-ce9102b1826c-serving-cert\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.610769 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b8452b-b22c-435e-9325-ce9102b1826c-client-ca\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.612417 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b8452b-b22c-435e-9325-ce9102b1826c-serving-cert\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.609217 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b8452b-b22c-435e-9325-ce9102b1826c-config\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.630992 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45h56\" (UniqueName: \"kubernetes.io/projected/45b8452b-b22c-435e-9325-ce9102b1826c-kube-api-access-45h56\") pod \"route-controller-manager-f4c688dfb-cc8z9\" (UID: \"45b8452b-b22c-435e-9325-ce9102b1826c\") " pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.701817 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.921668 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="800f2bea-945f-4d35-94a5-3889a856b2b1" path="/var/lib/kubelet/pods/800f2bea-945f-4d35-94a5-3889a856b2b1/volumes" Dec 02 13:48:38 crc kubenswrapper[4900]: I1202 13:48:38.931359 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cx89d"] Dec 02 13:48:38 crc kubenswrapper[4900]: W1202 13:48:38.938437 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82a8a72c_0859_415e_a8ee_d4cf2c7a772e.slice/crio-b007ee614e1dd7005500fe5a5734073be56ee0b721d742f7735d697ab505e88d WatchSource:0}: Error finding container b007ee614e1dd7005500fe5a5734073be56ee0b721d742f7735d697ab505e88d: Status 404 returned error can't find the container with id b007ee614e1dd7005500fe5a5734073be56ee0b721d742f7735d697ab505e88d Dec 02 13:48:39 crc kubenswrapper[4900]: I1202 13:48:39.151348 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9"] Dec 02 13:48:39 crc kubenswrapper[4900]: W1202 13:48:39.175506 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45b8452b_b22c_435e_9325_ce9102b1826c.slice/crio-db0bb4f4f9dc3c480f95002942470b9c7a0ea418098f74fca2c25fa0a8841fa0 WatchSource:0}: Error finding container db0bb4f4f9dc3c480f95002942470b9c7a0ea418098f74fca2c25fa0a8841fa0: Status 404 returned error can't find the container with id db0bb4f4f9dc3c480f95002942470b9c7a0ea418098f74fca2c25fa0a8841fa0 Dec 02 13:48:39 crc kubenswrapper[4900]: I1202 13:48:39.447039 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" event={"ID":"45b8452b-b22c-435e-9325-ce9102b1826c","Type":"ContainerStarted","Data":"f6f2c92e056f13e8945bbf494d6f7bca3a8c748d9fc9fcff406690083ea5f8a0"} Dec 02 13:48:39 crc kubenswrapper[4900]: I1202 13:48:39.447538 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" event={"ID":"45b8452b-b22c-435e-9325-ce9102b1826c","Type":"ContainerStarted","Data":"db0bb4f4f9dc3c480f95002942470b9c7a0ea418098f74fca2c25fa0a8841fa0"} Dec 02 13:48:39 crc kubenswrapper[4900]: I1202 13:48:39.448187 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:39 crc kubenswrapper[4900]: I1202 13:48:39.451838 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" event={"ID":"82a8a72c-0859-415e-a8ee-d4cf2c7a772e","Type":"ContainerStarted","Data":"5fa2bf05cf9e7554fb395604de41498bf77ac224ec28732442fc34b92ced8bce"} Dec 02 13:48:39 crc kubenswrapper[4900]: I1202 13:48:39.451895 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" event={"ID":"82a8a72c-0859-415e-a8ee-d4cf2c7a772e","Type":"ContainerStarted","Data":"b007ee614e1dd7005500fe5a5734073be56ee0b721d742f7735d697ab505e88d"} Dec 02 13:48:39 crc kubenswrapper[4900]: I1202 13:48:39.452542 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:39 crc kubenswrapper[4900]: I1202 13:48:39.478584 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" podStartSLOduration=3.478558194 podStartE2EDuration="3.478558194s" podCreationTimestamp="2025-12-02 13:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:48:39.472417315 +0000 UTC m=+364.888231166" watchObservedRunningTime="2025-12-02 13:48:39.478558194 +0000 UTC m=+364.894372085" Dec 02 13:48:39 crc kubenswrapper[4900]: I1202 13:48:39.503818 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" podStartSLOduration=1.5038003290000002 podStartE2EDuration="1.503800329s" podCreationTimestamp="2025-12-02 13:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:48:39.502097862 +0000 UTC m=+364.917911713" watchObservedRunningTime="2025-12-02 13:48:39.503800329 +0000 UTC m=+364.919614180" Dec 02 13:48:39 crc kubenswrapper[4900]: I1202 13:48:39.853490 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f4c688dfb-cc8z9" Dec 02 13:48:45 crc kubenswrapper[4900]: I1202 13:48:45.116636 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:48:45 crc kubenswrapper[4900]: I1202 13:48:45.117371 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:48:58 crc kubenswrapper[4900]: I1202 13:48:58.501738 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-cx89d" Dec 02 13:48:58 crc kubenswrapper[4900]: I1202 13:48:58.594704 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xffzf"] Dec 02 13:49:15 crc kubenswrapper[4900]: I1202 13:49:15.116631 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:49:15 crc kubenswrapper[4900]: I1202 13:49:15.117443 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:49:23 crc kubenswrapper[4900]: I1202 13:49:23.656199 4900 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" podUID="ff406d69-c78d-478d-947c-c1b9ae6ae503" containerName="registry" containerID="cri-o://d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1" gracePeriod=30 Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.164987 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.179183 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlpc9\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-kube-api-access-tlpc9\") pod \"ff406d69-c78d-478d-947c-c1b9ae6ae503\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.179270 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ff406d69-c78d-478d-947c-c1b9ae6ae503-ca-trust-extracted\") pod \"ff406d69-c78d-478d-947c-c1b9ae6ae503\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.179300 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ff406d69-c78d-478d-947c-c1b9ae6ae503-installation-pull-secrets\") pod \"ff406d69-c78d-478d-947c-c1b9ae6ae503\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.179319 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-bound-sa-token\") pod \"ff406d69-c78d-478d-947c-c1b9ae6ae503\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.179346 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-certificates\") pod \"ff406d69-c78d-478d-947c-c1b9ae6ae503\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.179464 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ff406d69-c78d-478d-947c-c1b9ae6ae503\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.179507 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-trusted-ca\") pod \"ff406d69-c78d-478d-947c-c1b9ae6ae503\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.179543 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-tls\") pod \"ff406d69-c78d-478d-947c-c1b9ae6ae503\" (UID: \"ff406d69-c78d-478d-947c-c1b9ae6ae503\") " Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.181088 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ff406d69-c78d-478d-947c-c1b9ae6ae503" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.181011 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ff406d69-c78d-478d-947c-c1b9ae6ae503" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.186612 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-kube-api-access-tlpc9" (OuterVolumeSpecName: "kube-api-access-tlpc9") pod "ff406d69-c78d-478d-947c-c1b9ae6ae503" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503"). InnerVolumeSpecName "kube-api-access-tlpc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.187708 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ff406d69-c78d-478d-947c-c1b9ae6ae503" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.188098 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff406d69-c78d-478d-947c-c1b9ae6ae503-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ff406d69-c78d-478d-947c-c1b9ae6ae503" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.198151 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ff406d69-c78d-478d-947c-c1b9ae6ae503" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.205467 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ff406d69-c78d-478d-947c-c1b9ae6ae503" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.226716 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff406d69-c78d-478d-947c-c1b9ae6ae503-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ff406d69-c78d-478d-947c-c1b9ae6ae503" (UID: "ff406d69-c78d-478d-947c-c1b9ae6ae503"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.280977 4900 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.281030 4900 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.281052 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlpc9\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-kube-api-access-tlpc9\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.281075 4900 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ff406d69-c78d-478d-947c-c1b9ae6ae503-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.281097 4900 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ff406d69-c78d-478d-947c-c1b9ae6ae503-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.281116 4900 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff406d69-c78d-478d-947c-c1b9ae6ae503-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.281142 4900 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ff406d69-c78d-478d-947c-c1b9ae6ae503-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.782724 4900 generic.go:334] "Generic (PLEG): container finished" podID="ff406d69-c78d-478d-947c-c1b9ae6ae503" containerID="d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1" exitCode=0 Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.782791 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" event={"ID":"ff406d69-c78d-478d-947c-c1b9ae6ae503","Type":"ContainerDied","Data":"d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1"} Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.783090 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" event={"ID":"ff406d69-c78d-478d-947c-c1b9ae6ae503","Type":"ContainerDied","Data":"765c375ee6d31cb96cda5b07c555f65bb70d0a9fce86b48f6a1bf577a33b48a9"} Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.783115 4900 scope.go:117] "RemoveContainer" containerID="d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.782824 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xffzf" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.804980 4900 scope.go:117] "RemoveContainer" containerID="d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1" Dec 02 13:49:24 crc kubenswrapper[4900]: E1202 13:49:24.805473 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1\": container with ID starting with d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1 not found: ID does not exist" containerID="d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.805533 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1"} err="failed to get container status \"d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1\": rpc error: code = NotFound desc = could not find container \"d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1\": container with ID starting with d95cd92571b4cd73c5a10143b9b78de4cbf28477cb9d528ecb7f2930dbc628f1 not found: ID does not exist" Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.824300 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xffzf"] Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.829454 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xffzf"] Dec 02 13:49:24 crc kubenswrapper[4900]: I1202 13:49:24.921579 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff406d69-c78d-478d-947c-c1b9ae6ae503" path="/var/lib/kubelet/pods/ff406d69-c78d-478d-947c-c1b9ae6ae503/volumes" Dec 02 13:49:45 crc kubenswrapper[4900]: I1202 13:49:45.116524 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:49:45 crc kubenswrapper[4900]: I1202 13:49:45.116952 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:49:45 crc kubenswrapper[4900]: I1202 13:49:45.116990 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:49:45 crc kubenswrapper[4900]: I1202 13:49:45.117426 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99208c08de62263a05d161e78ca2b735d405123b0b78d98c975543243600a6ba"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 13:49:45 crc kubenswrapper[4900]: I1202 13:49:45.117476 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" 
podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://99208c08de62263a05d161e78ca2b735d405123b0b78d98c975543243600a6ba" gracePeriod=600 Dec 02 13:49:45 crc kubenswrapper[4900]: I1202 13:49:45.924019 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="99208c08de62263a05d161e78ca2b735d405123b0b78d98c975543243600a6ba" exitCode=0 Dec 02 13:49:45 crc kubenswrapper[4900]: I1202 13:49:45.924107 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"99208c08de62263a05d161e78ca2b735d405123b0b78d98c975543243600a6ba"} Dec 02 13:49:45 crc kubenswrapper[4900]: I1202 13:49:45.925093 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"d6a7de400caf117429d90260321e7369a106edc882963bd6b93427292ee894ba"} Dec 02 13:49:45 crc kubenswrapper[4900]: I1202 13:49:45.925190 4900 scope.go:117] "RemoveContainer" containerID="3bb3e70ac468b74676b7c8ddee04017d005c7d7dfadde4d46e43f305ba2e64a3" Dec 02 13:51:45 crc kubenswrapper[4900]: I1202 13:51:45.116575 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:51:45 crc kubenswrapper[4900]: I1202 13:51:45.117363 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:52:15 crc kubenswrapper[4900]: I1202 13:52:15.116595 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:52:15 crc kubenswrapper[4900]: I1202 13:52:15.117313 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:52:45 crc kubenswrapper[4900]: I1202 13:52:45.116362 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:52:45 crc kubenswrapper[4900]: I1202 13:52:45.116924 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:52:45 
crc kubenswrapper[4900]: I1202 13:52:45.116975 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:52:45 crc kubenswrapper[4900]: I1202 13:52:45.118359 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6a7de400caf117429d90260321e7369a106edc882963bd6b93427292ee894ba"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 13:52:45 crc kubenswrapper[4900]: I1202 13:52:45.118432 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://d6a7de400caf117429d90260321e7369a106edc882963bd6b93427292ee894ba" gracePeriod=600 Dec 02 13:52:46 crc kubenswrapper[4900]: I1202 13:52:46.156521 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="d6a7de400caf117429d90260321e7369a106edc882963bd6b93427292ee894ba" exitCode=0 Dec 02 13:52:46 crc kubenswrapper[4900]: I1202 13:52:46.156750 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"d6a7de400caf117429d90260321e7369a106edc882963bd6b93427292ee894ba"} Dec 02 13:52:46 crc kubenswrapper[4900]: I1202 13:52:46.157014 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"a09a1c5505ad87f53094560a2e55e53cf8b4e88f885e7ed1b8c3af7bddb65c71"} Dec 02 13:52:46 crc kubenswrapper[4900]: I1202 13:52:46.157050 4900 scope.go:117] "RemoveContainer" containerID="99208c08de62263a05d161e78ca2b735d405123b0b78d98c975543243600a6ba" Dec 02 13:54:45 crc kubenswrapper[4900]: I1202 13:54:45.117057 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:54:45 crc kubenswrapper[4900]: I1202 13:54:45.117833 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:55:11 crc kubenswrapper[4900]: I1202 13:55:11.268598 4900 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 13:55:15 crc kubenswrapper[4900]: I1202 13:55:15.116946 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:55:15 crc kubenswrapper[4900]: I1202 13:55:15.117795 4900 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:55:16 crc kubenswrapper[4900]: I1202 13:55:16.752050 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-88rnd"] Dec 02 13:55:16 crc kubenswrapper[4900]: I1202 13:55:16.762495 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovn-controller" containerID="cri-o://05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c" gracePeriod=30 Dec 02 13:55:16 crc kubenswrapper[4900]: I1202 13:55:16.762590 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="nbdb" containerID="cri-o://b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd" gracePeriod=30 Dec 02 13:55:16 crc kubenswrapper[4900]: I1202 13:55:16.762619 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="sbdb" containerID="cri-o://db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2" gracePeriod=30 Dec 02 13:55:16 crc kubenswrapper[4900]: I1202 13:55:16.762724 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="kube-rbac-proxy-node" containerID="cri-o://70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d" gracePeriod=30 Dec 02 13:55:16 crc kubenswrapper[4900]: I1202 13:55:16.762736 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b" gracePeriod=30 Dec 02 13:55:16 crc kubenswrapper[4900]: I1202 13:55:16.762767 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovn-acl-logging" containerID="cri-o://9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0" gracePeriod=30 Dec 02 13:55:16 crc kubenswrapper[4900]: I1202 13:55:16.762792 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="northd" containerID="cri-o://04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9" gracePeriod=30 Dec 02 13:55:16 crc kubenswrapper[4900]: I1202 13:55:16.803488 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" containerID="cri-o://5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2" gracePeriod=30 Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.110335 4900 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/3.log" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.113310 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovn-acl-logging/0.log" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.113909 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovn-controller/0.log" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.114599 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.176632 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qgkzk"] Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177237 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177257 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177274 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="kube-rbac-proxy-node" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177282 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="kube-rbac-proxy-node" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177293 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff406d69-c78d-478d-947c-c1b9ae6ae503" containerName="registry" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177301 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff406d69-c78d-478d-947c-c1b9ae6ae503" containerName="registry" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177320 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="northd" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177328 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="northd" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177343 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="kubecfg-setup" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177351 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="kubecfg-setup" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177361 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="sbdb" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177370 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="sbdb" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177381 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 
13:55:17.177391 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177404 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177416 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177431 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177442 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177454 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovn-acl-logging" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177464 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovn-acl-logging" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177480 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="nbdb" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177488 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="nbdb" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177503 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovn-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177511 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovn-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177522 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177530 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177662 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovn-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177685 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177695 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="nbdb" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177709 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovn-acl-logging" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177719 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="sbdb" Dec 02 13:55:17 
crc kubenswrapper[4900]: I1202 13:55:17.177732 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177744 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177755 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="northd" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177766 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff406d69-c78d-478d-947c-c1b9ae6ae503" containerName="registry" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177777 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="kube-rbac-proxy-node" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177788 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.177921 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.177931 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.178046 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.178065 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerName="ovnkube-controller" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.195586 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.256722 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-script-lib\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.256959 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-etc-openvswitch\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257062 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-netd\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257122 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257160 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-node-log\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257211 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257293 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-log-socket\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257340 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4d72\" (UniqueName: \"kubernetes.io/projected/338f7f04-2450-4efb-a2e7-3c0e13eb8998-kube-api-access-z4d72\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257381 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-kubelet\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257404 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-ovn-kubernetes\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257400 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-log-socket" (OuterVolumeSpecName: "log-socket") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257447 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-config\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257486 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257520 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-env-overrides\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257545 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257619 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-bin\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257666 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-systemd-units\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257714 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-systemd\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257787 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-ovn\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257822 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-slash\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257872 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-netns\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257700 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257741 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257847 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.257912 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-slash" (OuterVolumeSpecName: "host-slash") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258005 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258126 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-var-lib-cni-networks-ovn-kubernetes\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258128 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-node-log" (OuterVolumeSpecName: "node-log") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258158 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258227 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258197 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-var-lib-openvswitch\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258339 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-openvswitch\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258391 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovn-node-metrics-cert\") pod \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\" (UID: \"338f7f04-2450-4efb-a2e7-3c0e13eb8998\") " Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258452 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258562 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-var-lib-openvswitch\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258627 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-ovn-node-metrics-cert\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258717 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-node-log\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258752 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-ovnkube-config\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258824 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: 
"338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258861 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-slash\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.258921 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-cni-bin\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259000 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-kubelet\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259042 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-run-ovn\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259081 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-systemd-units\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259143 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-run-ovn-kubernetes\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259173 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-ovnkube-script-lib\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259209 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-cni-netd\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259239 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hspw5\" 
(UniqueName: \"kubernetes.io/projected/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-kube-api-access-hspw5\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259251 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259335 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-run-netns\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259371 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-run-openvswitch\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259414 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-etc-openvswitch\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259437 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-env-overrides\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259529 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259580 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-run-systemd\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259613 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-log-socket\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc 
kubenswrapper[4900]: I1202 13:55:17.259699 4900 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259715 4900 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259729 4900 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259742 4900 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-node-log\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259754 4900 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-log-socket\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259768 4900 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259781 4900 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259793 4900 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259805 4900 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259817 4900 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259830 4900 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259843 4900 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-slash\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259855 4900 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259867 4900 reconciler_common.go:293] "Volume detached for 
volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259880 4900 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.259894 4900 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.260338 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.264685 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.265411 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/338f7f04-2450-4efb-a2e7-3c0e13eb8998-kube-api-access-z4d72" (OuterVolumeSpecName: "kube-api-access-z4d72") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "kube-api-access-z4d72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.266636 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r8pv9_7cacd7d0-a1a1-4ea0-b918-a73c8220e500/kube-multus/2.log" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.267143 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r8pv9_7cacd7d0-a1a1-4ea0-b918-a73c8220e500/kube-multus/1.log" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.267188 4900 generic.go:334] "Generic (PLEG): container finished" podID="7cacd7d0-a1a1-4ea0-b918-a73c8220e500" containerID="3f1d3316a23a35820d847ba051ae6244de8214e97b45c832a2f23ac699e8cf53" exitCode=2 Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.267255 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r8pv9" event={"ID":"7cacd7d0-a1a1-4ea0-b918-a73c8220e500","Type":"ContainerDied","Data":"3f1d3316a23a35820d847ba051ae6244de8214e97b45c832a2f23ac699e8cf53"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.267300 4900 scope.go:117] "RemoveContainer" containerID="ea7dea40cfa7b7927bd5d05d66b6349f7e95acfae27fad3f757abe4cf8d9c0a8" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.267927 4900 scope.go:117] "RemoveContainer" containerID="3f1d3316a23a35820d847ba051ae6244de8214e97b45c832a2f23ac699e8cf53" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.280991 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovnkube-controller/3.log" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.283876 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "338f7f04-2450-4efb-a2e7-3c0e13eb8998" (UID: "338f7f04-2450-4efb-a2e7-3c0e13eb8998"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.284314 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovn-acl-logging/0.log" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.285253 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-88rnd_338f7f04-2450-4efb-a2e7-3c0e13eb8998/ovn-controller/0.log" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.285857 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2" exitCode=0 Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.285894 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2" exitCode=0 Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.285903 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd" exitCode=0 Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.285912 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9" exitCode=0 Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.285924 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b" exitCode=0 Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.285930 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d" exitCode=0 Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.285936 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0" exitCode=143 Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.285943 4900 generic.go:334] "Generic (PLEG): container finished" podID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" containerID="05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c" exitCode=143 Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.285968 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.285996 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286008 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd"} Dec 02 13:55:17 crc 
kubenswrapper[4900]: I1202 13:55:17.286024 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286034 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286046 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286058 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286074 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286081 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286090 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286099 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286107 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286116 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286123 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286131 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286137 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286146 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286158 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286167 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286175 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286195 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286203 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286210 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286216 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286221 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286226 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286231 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286239 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286247 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286254 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 
13:55:17.286259 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286264 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286269 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286275 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286280 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286285 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286290 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286295 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286302 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" event={"ID":"338f7f04-2450-4efb-a2e7-3c0e13eb8998","Type":"ContainerDied","Data":"04410f51f2d59ec9e3e78ea84ec3a051b020b87a665c3c265986f1f6689d272a"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286309 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286315 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286320 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286325 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286331 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 
13:55:17.286338 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286344 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286350 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286356 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286362 4900 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d"} Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.286059 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-88rnd" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.311700 4900 scope.go:117] "RemoveContainer" containerID="5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.333044 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-88rnd"] Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.340589 4900 scope.go:117] "RemoveContainer" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.341746 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-88rnd"] Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361245 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-log-socket\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361320 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-var-lib-openvswitch\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361364 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-ovn-node-metrics-cert\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361391 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-node-log\") pod \"ovnkube-node-qgkzk\" (UID: 
\"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361445 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-var-lib-openvswitch\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361459 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-ovnkube-config\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361551 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-node-log\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361593 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-slash\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361568 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-slash\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361691 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-cni-bin\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361722 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-kubelet\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361747 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-run-ovn\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361787 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-systemd-units\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361876 4900 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-run-ovn-kubernetes\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361909 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-ovnkube-script-lib\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361941 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-cni-netd\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.361977 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hspw5\" (UniqueName: \"kubernetes.io/projected/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-kube-api-access-hspw5\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362016 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-run-netns\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362047 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-run-openvswitch\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362098 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-env-overrides\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362122 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-etc-openvswitch\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362165 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362217 4900 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-run-systemd\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362309 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-kubelet\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362311 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-run-ovn\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362374 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-systemd-units\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362469 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-ovnkube-config\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362676 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-run-systemd\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362676 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362696 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-cni-bin\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362716 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-run-netns\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362735 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-run-openvswitch\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362827 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-etc-openvswitch\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362851 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-run-ovn-kubernetes\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362884 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-host-cni-netd\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362951 4900 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/338f7f04-2450-4efb-a2e7-3c0e13eb8998-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362965 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4d72\" (UniqueName: \"kubernetes.io/projected/338f7f04-2450-4efb-a2e7-3c0e13eb8998-kube-api-access-z4d72\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362980 4900 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/338f7f04-2450-4efb-a2e7-3c0e13eb8998-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.362990 4900 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/338f7f04-2450-4efb-a2e7-3c0e13eb8998-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.363467 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-env-overrides\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.363582 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-ovnkube-script-lib\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.363657 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-log-socket\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.367774 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-ovn-node-metrics-cert\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.376112 4900 scope.go:117] "RemoveContainer" containerID="db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.385991 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hspw5\" (UniqueName: \"kubernetes.io/projected/c56ff65d-37ed-4b68-8b79-a69b7adc5c05-kube-api-access-hspw5\") pod \"ovnkube-node-qgkzk\" (UID: \"c56ff65d-37ed-4b68-8b79-a69b7adc5c05\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.402297 4900 scope.go:117] "RemoveContainer" containerID="b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.421989 4900 scope.go:117] "RemoveContainer" containerID="04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.441428 4900 scope.go:117] "RemoveContainer" containerID="925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.465419 4900 scope.go:117] "RemoveContainer" containerID="70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.483923 4900 scope.go:117] "RemoveContainer" containerID="9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.506510 4900 scope.go:117] "RemoveContainer" containerID="05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.519299 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.522613 4900 scope.go:117] "RemoveContainer" containerID="c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.555291 4900 scope.go:117] "RemoveContainer" containerID="5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.557572 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2\": container with ID starting with 5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2 not found: ID does not exist" containerID="5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.557703 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2"} err="failed to get container status \"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2\": rpc error: code = NotFound desc = could not find container \"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2\": container with ID starting with 5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.557752 4900 scope.go:117] "RemoveContainer" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.558233 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\": container with ID starting with efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c not found: ID does not exist" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.558271 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c"} err="failed to get container status \"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\": rpc error: code = NotFound desc = could not find container \"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\": container with ID starting with efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.558295 4900 scope.go:117] "RemoveContainer" containerID="db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.558808 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\": container with ID starting with db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2 not found: ID does not exist" containerID="db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.558845 4900 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2"} err="failed to get container status \"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\": rpc error: code = NotFound desc = could not find container \"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\": container with ID starting with db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.558869 4900 scope.go:117] "RemoveContainer" containerID="b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.559219 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\": container with ID starting with b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd not found: ID does not exist" containerID="b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.559259 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd"} err="failed to get container status \"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\": rpc error: code = NotFound desc = could not find container \"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\": container with ID starting with b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.559285 4900 scope.go:117] "RemoveContainer" containerID="04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.560113 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\": container with ID starting with 04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9 not found: ID does not exist" containerID="04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.560167 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9"} err="failed to get container status \"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\": rpc error: code = NotFound desc = could not find container \"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\": container with ID starting with 04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.560192 4900 scope.go:117] "RemoveContainer" containerID="925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.560575 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\": container with ID starting with 925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b not found: ID does not exist" 
containerID="925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.560614 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b"} err="failed to get container status \"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\": rpc error: code = NotFound desc = could not find container \"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\": container with ID starting with 925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.560666 4900 scope.go:117] "RemoveContainer" containerID="70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.561437 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\": container with ID starting with 70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d not found: ID does not exist" containerID="70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.561478 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d"} err="failed to get container status \"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\": rpc error: code = NotFound desc = could not find container \"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\": container with ID starting with 70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.561509 4900 scope.go:117] "RemoveContainer" containerID="9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.562329 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\": container with ID starting with 9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0 not found: ID does not exist" containerID="9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.562371 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0"} err="failed to get container status \"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\": rpc error: code = NotFound desc = could not find container \"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\": container with ID starting with 9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.562403 4900 scope.go:117] "RemoveContainer" containerID="05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.563452 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\": container with ID starting with 05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c not found: ID does not exist" containerID="05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.563482 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c"} err="failed to get container status \"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\": rpc error: code = NotFound desc = could not find container \"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\": container with ID starting with 05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.563505 4900 scope.go:117] "RemoveContainer" containerID="c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d" Dec 02 13:55:17 crc kubenswrapper[4900]: E1202 13:55:17.564801 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\": container with ID starting with c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d not found: ID does not exist" containerID="c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.564849 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d"} err="failed to get container status \"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\": rpc error: code = NotFound desc = could not find container \"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\": container with ID starting with c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.564884 4900 scope.go:117] "RemoveContainer" containerID="5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.565619 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2"} err="failed to get container status \"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2\": rpc error: code = NotFound desc = could not find container \"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2\": container with ID starting with 5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.565683 4900 scope.go:117] "RemoveContainer" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.567147 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c"} err="failed to get container status \"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\": rpc error: code = NotFound desc = could not find container \"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\": container with ID starting with 
efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.567183 4900 scope.go:117] "RemoveContainer" containerID="db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.568110 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2"} err="failed to get container status \"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\": rpc error: code = NotFound desc = could not find container \"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\": container with ID starting with db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.568215 4900 scope.go:117] "RemoveContainer" containerID="b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.569167 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd"} err="failed to get container status \"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\": rpc error: code = NotFound desc = could not find container \"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\": container with ID starting with b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.569215 4900 scope.go:117] "RemoveContainer" containerID="04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.569852 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9"} err="failed to get container status \"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\": rpc error: code = NotFound desc = could not find container \"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\": container with ID starting with 04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.569880 4900 scope.go:117] "RemoveContainer" containerID="925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.570821 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b"} err="failed to get container status \"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\": rpc error: code = NotFound desc = could not find container \"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\": container with ID starting with 925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.570845 4900 scope.go:117] "RemoveContainer" containerID="70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.571534 4900 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d"} err="failed to get container status \"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\": rpc error: code = NotFound desc = could not find container \"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\": container with ID starting with 70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.571610 4900 scope.go:117] "RemoveContainer" containerID="9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.572149 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0"} err="failed to get container status \"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\": rpc error: code = NotFound desc = could not find container \"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\": container with ID starting with 9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.572213 4900 scope.go:117] "RemoveContainer" containerID="05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.572779 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c"} err="failed to get container status \"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\": rpc error: code = NotFound desc = could not find container \"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\": container with ID starting with 05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.572812 4900 scope.go:117] "RemoveContainer" containerID="c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.573091 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d"} err="failed to get container status \"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\": rpc error: code = NotFound desc = could not find container \"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\": container with ID starting with c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.573115 4900 scope.go:117] "RemoveContainer" containerID="5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.573366 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2"} err="failed to get container status \"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2\": rpc error: code = NotFound desc = could not find container \"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2\": container with ID starting with 5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2 not found: ID does not exist" Dec 
02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.573404 4900 scope.go:117] "RemoveContainer" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.573768 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c"} err="failed to get container status \"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\": rpc error: code = NotFound desc = could not find container \"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\": container with ID starting with efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.573800 4900 scope.go:117] "RemoveContainer" containerID="db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.574224 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2"} err="failed to get container status \"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\": rpc error: code = NotFound desc = could not find container \"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\": container with ID starting with db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.574255 4900 scope.go:117] "RemoveContainer" containerID="b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.574706 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd"} err="failed to get container status \"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\": rpc error: code = NotFound desc = could not find container \"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\": container with ID starting with b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.574742 4900 scope.go:117] "RemoveContainer" containerID="04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.575151 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9"} err="failed to get container status \"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\": rpc error: code = NotFound desc = could not find container \"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\": container with ID starting with 04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.575177 4900 scope.go:117] "RemoveContainer" containerID="925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.575757 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b"} err="failed to get container status 
\"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\": rpc error: code = NotFound desc = could not find container \"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\": container with ID starting with 925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.575840 4900 scope.go:117] "RemoveContainer" containerID="70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d" Dec 02 13:55:17 crc kubenswrapper[4900]: W1202 13:55:17.576352 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc56ff65d_37ed_4b68_8b79_a69b7adc5c05.slice/crio-c14560d8d434de004fc71d85db37044820e4f708a1eaab15de78b454ab59b4a0 WatchSource:0}: Error finding container c14560d8d434de004fc71d85db37044820e4f708a1eaab15de78b454ab59b4a0: Status 404 returned error can't find the container with id c14560d8d434de004fc71d85db37044820e4f708a1eaab15de78b454ab59b4a0 Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.576525 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d"} err="failed to get container status \"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\": rpc error: code = NotFound desc = could not find container \"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\": container with ID starting with 70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.576557 4900 scope.go:117] "RemoveContainer" containerID="9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.577198 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0"} err="failed to get container status \"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\": rpc error: code = NotFound desc = could not find container \"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\": container with ID starting with 9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.577220 4900 scope.go:117] "RemoveContainer" containerID="05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.577496 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c"} err="failed to get container status \"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\": rpc error: code = NotFound desc = could not find container \"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\": container with ID starting with 05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.577523 4900 scope.go:117] "RemoveContainer" containerID="c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.578121 4900 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d"} err="failed to get container status \"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\": rpc error: code = NotFound desc = could not find container \"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\": container with ID starting with c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.578153 4900 scope.go:117] "RemoveContainer" containerID="5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.578572 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2"} err="failed to get container status \"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2\": rpc error: code = NotFound desc = could not find container \"5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2\": container with ID starting with 5c295b545ee7000055bc53265e53799fddb2faf7baf3cbc4aefb40549eab0cf2 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.578604 4900 scope.go:117] "RemoveContainer" containerID="efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.579042 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c"} err="failed to get container status \"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\": rpc error: code = NotFound desc = could not find container \"efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c\": container with ID starting with efeea340bcc7bea47b9f73bf4e340370a4d5026a90803c18ccbd6d101f96728c not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.579067 4900 scope.go:117] "RemoveContainer" containerID="db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.579407 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2"} err="failed to get container status \"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\": rpc error: code = NotFound desc = could not find container \"db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2\": container with ID starting with db65b7a3b8f7e8b5d9fd37bbbce1e94de0af0cd3e8343047a4b84f06a84eadd2 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.579436 4900 scope.go:117] "RemoveContainer" containerID="b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.580387 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd"} err="failed to get container status \"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\": rpc error: code = NotFound desc = could not find container \"b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd\": container with ID starting with b41b146b34f8283eeaeb6bf1291916d992362a6f2788b55ba87d23bd9dc554cd not found: ID does not exist" Dec 
02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.580416 4900 scope.go:117] "RemoveContainer" containerID="04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.582571 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9"} err="failed to get container status \"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\": rpc error: code = NotFound desc = could not find container \"04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9\": container with ID starting with 04842923e6923454b46dda19f62722ac4cbea15a9583d526a08c2f4410da7da9 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.582599 4900 scope.go:117] "RemoveContainer" containerID="925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.587232 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b"} err="failed to get container status \"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\": rpc error: code = NotFound desc = could not find container \"925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b\": container with ID starting with 925ee270b8f808fdf623c4498b712b9ed366922538ed25b1d8170a2b6c55062b not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.587261 4900 scope.go:117] "RemoveContainer" containerID="70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.587956 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d"} err="failed to get container status \"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\": rpc error: code = NotFound desc = could not find container \"70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d\": container with ID starting with 70034ef79edbde53b75e2402efa38f237f380f314d7be4ef6519781a75a7c41d not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.588016 4900 scope.go:117] "RemoveContainer" containerID="9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.588990 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0"} err="failed to get container status \"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\": rpc error: code = NotFound desc = could not find container \"9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0\": container with ID starting with 9567839be10d88a60857e32c387c021f85d9b1891b6266db207be0557c4f2dd0 not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.589081 4900 scope.go:117] "RemoveContainer" containerID="05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.589541 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c"} err="failed to get container status 
\"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\": rpc error: code = NotFound desc = could not find container \"05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c\": container with ID starting with 05360cd7d34ebddd013a0f78317b851494dac6bfb94d52517d9b94208b69b84c not found: ID does not exist" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.589568 4900 scope.go:117] "RemoveContainer" containerID="c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d" Dec 02 13:55:17 crc kubenswrapper[4900]: I1202 13:55:17.590763 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d"} err="failed to get container status \"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\": rpc error: code = NotFound desc = could not find container \"c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d\": container with ID starting with c86eb0309e799a065527e931f4325dbbfb1bf9fc25e2a1dfd752ee1723ae368d not found: ID does not exist" Dec 02 13:55:18 crc kubenswrapper[4900]: I1202 13:55:18.298699 4900 generic.go:334] "Generic (PLEG): container finished" podID="c56ff65d-37ed-4b68-8b79-a69b7adc5c05" containerID="938f052018ff0f1bf84580ce6992a31f46971ff1276686e1f2359e655d3c5fab" exitCode=0 Dec 02 13:55:18 crc kubenswrapper[4900]: I1202 13:55:18.298803 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" event={"ID":"c56ff65d-37ed-4b68-8b79-a69b7adc5c05","Type":"ContainerDied","Data":"938f052018ff0f1bf84580ce6992a31f46971ff1276686e1f2359e655d3c5fab"} Dec 02 13:55:18 crc kubenswrapper[4900]: I1202 13:55:18.300633 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" event={"ID":"c56ff65d-37ed-4b68-8b79-a69b7adc5c05","Type":"ContainerStarted","Data":"c14560d8d434de004fc71d85db37044820e4f708a1eaab15de78b454ab59b4a0"} Dec 02 13:55:18 crc kubenswrapper[4900]: I1202 13:55:18.306857 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r8pv9_7cacd7d0-a1a1-4ea0-b918-a73c8220e500/kube-multus/2.log" Dec 02 13:55:18 crc kubenswrapper[4900]: I1202 13:55:18.307118 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r8pv9" event={"ID":"7cacd7d0-a1a1-4ea0-b918-a73c8220e500","Type":"ContainerStarted","Data":"90765e162f8b4a44582f38eb3e59de46fa01059cc97ca3dc86e9fad5e856a170"} Dec 02 13:55:18 crc kubenswrapper[4900]: I1202 13:55:18.919579 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="338f7f04-2450-4efb-a2e7-3c0e13eb8998" path="/var/lib/kubelet/pods/338f7f04-2450-4efb-a2e7-3c0e13eb8998/volumes" Dec 02 13:55:19 crc kubenswrapper[4900]: I1202 13:55:19.324171 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" event={"ID":"c56ff65d-37ed-4b68-8b79-a69b7adc5c05","Type":"ContainerStarted","Data":"7652f52ebccba6e18ba95d64b7c0a98def3d5737969d0349651c20df0b86dc8a"} Dec 02 13:55:19 crc kubenswrapper[4900]: I1202 13:55:19.325267 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" event={"ID":"c56ff65d-37ed-4b68-8b79-a69b7adc5c05","Type":"ContainerStarted","Data":"e133c663d458eb15764d08608e62a62bf8d1abcc1371355e1631b150bc12d636"} Dec 02 13:55:19 crc kubenswrapper[4900]: I1202 13:55:19.325425 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" event={"ID":"c56ff65d-37ed-4b68-8b79-a69b7adc5c05","Type":"ContainerStarted","Data":"42c394f92d178d4b6d595fe98363422d85178721f2d710a42114581fed5c4dc0"} Dec 02 13:55:19 crc kubenswrapper[4900]: I1202 13:55:19.325544 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" event={"ID":"c56ff65d-37ed-4b68-8b79-a69b7adc5c05","Type":"ContainerStarted","Data":"91f2b23be4e519793136f1e7008e63ef6881cbf4eb425c3f4cc4648107337059"} Dec 02 13:55:19 crc kubenswrapper[4900]: I1202 13:55:19.325736 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" event={"ID":"c56ff65d-37ed-4b68-8b79-a69b7adc5c05","Type":"ContainerStarted","Data":"4394debdf7751497859186bf6625d868b73a7eaa0616d597bcdf24a1101f1927"} Dec 02 13:55:20 crc kubenswrapper[4900]: I1202 13:55:20.342047 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" event={"ID":"c56ff65d-37ed-4b68-8b79-a69b7adc5c05","Type":"ContainerStarted","Data":"af4def08464ea76e0732784e8c8d2edaebbd46ac2fa6b0c7b195ec1f81b5c4a7"} Dec 02 13:55:22 crc kubenswrapper[4900]: I1202 13:55:22.364047 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" event={"ID":"c56ff65d-37ed-4b68-8b79-a69b7adc5c05","Type":"ContainerStarted","Data":"9c396091ed6fbb064803ee2e5080f0ed42b9bd09f8a85e497fd1f9e22089b2ac"} Dec 02 13:55:25 crc kubenswrapper[4900]: I1202 13:55:25.396299 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" event={"ID":"c56ff65d-37ed-4b68-8b79-a69b7adc5c05","Type":"ContainerStarted","Data":"b68595bd6880720cfd306d58ee76275f9fb7997f75d8de04d0c28dc79f5c1f0c"} Dec 02 13:55:25 crc kubenswrapper[4900]: I1202 13:55:25.398158 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:25 crc kubenswrapper[4900]: I1202 13:55:25.398327 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:25 crc kubenswrapper[4900]: I1202 13:55:25.398461 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:25 crc kubenswrapper[4900]: I1202 13:55:25.448370 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:25 crc kubenswrapper[4900]: I1202 13:55:25.454927 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" podStartSLOduration=8.454909391 podStartE2EDuration="8.454909391s" podCreationTimestamp="2025-12-02 13:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:55:25.443634457 +0000 UTC m=+770.859448348" watchObservedRunningTime="2025-12-02 13:55:25.454909391 +0000 UTC m=+770.870723252" Dec 02 13:55:25 crc kubenswrapper[4900]: I1202 13:55:25.457161 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.288314 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-lrx7m"] Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.291306 4900 util.go:30] "No sandbox for pod can 
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.288314 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-lrx7m"]
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.291306 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.294129 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.294619 4900 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-p768c"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.294711 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.294948 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.310313 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-lrx7m"]
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.437036 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbhpn\" (UniqueName: \"kubernetes.io/projected/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-kube-api-access-hbhpn\") pod \"crc-storage-crc-lrx7m\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.437165 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-crc-storage\") pod \"crc-storage-crc-lrx7m\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.437256 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-node-mnt\") pod \"crc-storage-crc-lrx7m\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.538427 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbhpn\" (UniqueName: \"kubernetes.io/projected/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-kube-api-access-hbhpn\") pod \"crc-storage-crc-lrx7m\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.538954 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-crc-storage\") pod \"crc-storage-crc-lrx7m\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.539039 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-node-mnt\") pod \"crc-storage-crc-lrx7m\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.539738 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-node-mnt\") pod \"crc-storage-crc-lrx7m\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.540156 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-crc-storage\") pod \"crc-storage-crc-lrx7m\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.562569 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbhpn\" (UniqueName: \"kubernetes.io/projected/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-kube-api-access-hbhpn\") pod \"crc-storage-crc-lrx7m\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: I1202 13:55:27.754480 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: E1202 13:55:27.801582 4900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lrx7m_crc-storage_1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae_0(a7ec902d9f0463aad3ce66fc214f82d9119a02e7db0b0c811527ee10bfb6c0d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 02 13:55:27 crc kubenswrapper[4900]: E1202 13:55:27.801751 4900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lrx7m_crc-storage_1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae_0(a7ec902d9f0463aad3ce66fc214f82d9119a02e7db0b0c811527ee10bfb6c0d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: E1202 13:55:27.801806 4900 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lrx7m_crc-storage_1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae_0(a7ec902d9f0463aad3ce66fc214f82d9119a02e7db0b0c811527ee10bfb6c0d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-lrx7m"
Dec 02 13:55:27 crc kubenswrapper[4900]: E1202 13:55:27.801916 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-lrx7m_crc-storage(1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-lrx7m_crc-storage(1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lrx7m_crc-storage_1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae_0(a7ec902d9f0463aad3ce66fc214f82d9119a02e7db0b0c811527ee10bfb6c0d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-lrx7m" podUID="1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae"
Need to start a new one" pod="crc-storage/crc-storage-crc-lrx7m" Dec 02 13:55:28 crc kubenswrapper[4900]: I1202 13:55:28.960662 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-lrx7m"] Dec 02 13:55:28 crc kubenswrapper[4900]: I1202 13:55:28.970337 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 13:55:29 crc kubenswrapper[4900]: I1202 13:55:29.428709 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lrx7m" event={"ID":"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae","Type":"ContainerStarted","Data":"459918115ec53f69ce4135dc9302a965820a47ae52dedbe5f0d1a20bdefda997"} Dec 02 13:55:31 crc kubenswrapper[4900]: I1202 13:55:31.446487 4900 generic.go:334] "Generic (PLEG): container finished" podID="1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae" containerID="d0f05bd33c91e566b3f47f46beb63ce3a10fb9bb00368aab9222e9f18b8c846b" exitCode=0 Dec 02 13:55:31 crc kubenswrapper[4900]: I1202 13:55:31.446567 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lrx7m" event={"ID":"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae","Type":"ContainerDied","Data":"d0f05bd33c91e566b3f47f46beb63ce3a10fb9bb00368aab9222e9f18b8c846b"} Dec 02 13:55:32 crc kubenswrapper[4900]: I1202 13:55:32.816671 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lrx7m" Dec 02 13:55:32 crc kubenswrapper[4900]: I1202 13:55:32.931287 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbhpn\" (UniqueName: \"kubernetes.io/projected/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-kube-api-access-hbhpn\") pod \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " Dec 02 13:55:32 crc kubenswrapper[4900]: I1202 13:55:32.931418 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-crc-storage\") pod \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " Dec 02 13:55:32 crc kubenswrapper[4900]: I1202 13:55:32.931519 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-node-mnt\") pod \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\" (UID: \"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae\") " Dec 02 13:55:32 crc kubenswrapper[4900]: I1202 13:55:32.931724 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae" (UID: "1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:55:32 crc kubenswrapper[4900]: I1202 13:55:32.931814 4900 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:32 crc kubenswrapper[4900]: I1202 13:55:32.939142 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-kube-api-access-hbhpn" (OuterVolumeSpecName: "kube-api-access-hbhpn") pod "1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae" (UID: "1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae"). 
InnerVolumeSpecName "kube-api-access-hbhpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:55:32 crc kubenswrapper[4900]: I1202 13:55:32.950476 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae" (UID: "1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:55:33 crc kubenswrapper[4900]: I1202 13:55:33.033296 4900 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:33 crc kubenswrapper[4900]: I1202 13:55:33.033342 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbhpn\" (UniqueName: \"kubernetes.io/projected/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae-kube-api-access-hbhpn\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:33 crc kubenswrapper[4900]: I1202 13:55:33.461211 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lrx7m" event={"ID":"1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae","Type":"ContainerDied","Data":"459918115ec53f69ce4135dc9302a965820a47ae52dedbe5f0d1a20bdefda997"} Dec 02 13:55:33 crc kubenswrapper[4900]: I1202 13:55:33.461276 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="459918115ec53f69ce4135dc9302a965820a47ae52dedbe5f0d1a20bdefda997" Dec 02 13:55:33 crc kubenswrapper[4900]: I1202 13:55:33.461338 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lrx7m" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.410686 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl"] Dec 02 13:55:41 crc kubenswrapper[4900]: E1202 13:55:41.412222 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae" containerName="storage" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.412326 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae" containerName="storage" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.412526 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae" containerName="storage" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.413429 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.416331 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.420497 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl"] Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.559925 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.560245 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.560348 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdpz\" (UniqueName: \"kubernetes.io/projected/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-kube-api-access-wzdpz\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.661536 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.661725 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.661781 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzdpz\" (UniqueName: \"kubernetes.io/projected/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-kube-api-access-wzdpz\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.662729 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.662860 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.696296 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzdpz\" (UniqueName: \"kubernetes.io/projected/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-kube-api-access-wzdpz\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:41 crc kubenswrapper[4900]: I1202 13:55:41.747744 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:42 crc kubenswrapper[4900]: I1202 13:55:42.024949 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl"] Dec 02 13:55:42 crc kubenswrapper[4900]: W1202 13:55:42.030901 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceb9bfc0_6b89_467f_b74a_678be8a2df0c.slice/crio-c1e6e2f37fb496b3d1f5cd7269b9907747749ac402cadb971f1a2c27c42d98c3 WatchSource:0}: Error finding container c1e6e2f37fb496b3d1f5cd7269b9907747749ac402cadb971f1a2c27c42d98c3: Status 404 returned error can't find the container with id c1e6e2f37fb496b3d1f5cd7269b9907747749ac402cadb971f1a2c27c42d98c3 Dec 02 13:55:42 crc kubenswrapper[4900]: I1202 13:55:42.525600 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" event={"ID":"ceb9bfc0-6b89-467f-b74a-678be8a2df0c","Type":"ContainerStarted","Data":"c87326ab61237d0a4c2cf2cc52ce55eb350c8bdba178adc76253ac1a848da391"} Dec 02 13:55:42 crc kubenswrapper[4900]: I1202 13:55:42.525690 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" event={"ID":"ceb9bfc0-6b89-467f-b74a-678be8a2df0c","Type":"ContainerStarted","Data":"c1e6e2f37fb496b3d1f5cd7269b9907747749ac402cadb971f1a2c27c42d98c3"} Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.529483 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lqqsq"] Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.531220 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.536771 4900 generic.go:334] "Generic (PLEG): container finished" podID="ceb9bfc0-6b89-467f-b74a-678be8a2df0c" containerID="c87326ab61237d0a4c2cf2cc52ce55eb350c8bdba178adc76253ac1a848da391" exitCode=0 Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.536859 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" event={"ID":"ceb9bfc0-6b89-467f-b74a-678be8a2df0c","Type":"ContainerDied","Data":"c87326ab61237d0a4c2cf2cc52ce55eb350c8bdba178adc76253ac1a848da391"} Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.552522 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lqqsq"] Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.594111 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-utilities\") pod \"redhat-operators-lqqsq\" (UID: \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.594231 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-catalog-content\") pod \"redhat-operators-lqqsq\" (UID: \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.594297 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nptfh\" (UniqueName: \"kubernetes.io/projected/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-kube-api-access-nptfh\") pod \"redhat-operators-lqqsq\" (UID: \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.695696 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-utilities\") pod \"redhat-operators-lqqsq\" (UID: \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.695789 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-catalog-content\") pod \"redhat-operators-lqqsq\" (UID: \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.695842 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nptfh\" (UniqueName: \"kubernetes.io/projected/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-kube-api-access-nptfh\") pod \"redhat-operators-lqqsq\" (UID: \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.697094 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-utilities\") pod \"redhat-operators-lqqsq\" (UID: 
\"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.697162 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-catalog-content\") pod \"redhat-operators-lqqsq\" (UID: \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.715306 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nptfh\" (UniqueName: \"kubernetes.io/projected/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-kube-api-access-nptfh\") pod \"redhat-operators-lqqsq\" (UID: \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:43 crc kubenswrapper[4900]: I1202 13:55:43.865848 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:44 crc kubenswrapper[4900]: I1202 13:55:44.317608 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lqqsq"] Dec 02 13:55:44 crc kubenswrapper[4900]: W1202 13:55:44.326387 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde6cbbec_5830_4dd7_ab9e_9672e973ea6f.slice/crio-846e17aa00c667bc27d9cd59770f13d5aadf6925c1934ba92a8a71b77ae77c28 WatchSource:0}: Error finding container 846e17aa00c667bc27d9cd59770f13d5aadf6925c1934ba92a8a71b77ae77c28: Status 404 returned error can't find the container with id 846e17aa00c667bc27d9cd59770f13d5aadf6925c1934ba92a8a71b77ae77c28 Dec 02 13:55:44 crc kubenswrapper[4900]: I1202 13:55:44.543750 4900 generic.go:334] "Generic (PLEG): container finished" podID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" containerID="bc45dfdeb92e6cc830e907f5805cdd60a8209a09a7d691768a958e1f5118e316" exitCode=0 Dec 02 13:55:44 crc kubenswrapper[4900]: I1202 13:55:44.543817 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqqsq" event={"ID":"de6cbbec-5830-4dd7-ab9e-9672e973ea6f","Type":"ContainerDied","Data":"bc45dfdeb92e6cc830e907f5805cdd60a8209a09a7d691768a958e1f5118e316"} Dec 02 13:55:44 crc kubenswrapper[4900]: I1202 13:55:44.543862 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqqsq" event={"ID":"de6cbbec-5830-4dd7-ab9e-9672e973ea6f","Type":"ContainerStarted","Data":"846e17aa00c667bc27d9cd59770f13d5aadf6925c1934ba92a8a71b77ae77c28"} Dec 02 13:55:45 crc kubenswrapper[4900]: I1202 13:55:45.116976 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:55:45 crc kubenswrapper[4900]: I1202 13:55:45.117691 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:55:45 crc kubenswrapper[4900]: I1202 13:55:45.117865 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:55:45 crc kubenswrapper[4900]: I1202 13:55:45.118801 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a09a1c5505ad87f53094560a2e55e53cf8b4e88f885e7ed1b8c3af7bddb65c71"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 13:55:45 crc kubenswrapper[4900]: I1202 13:55:45.119039 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://a09a1c5505ad87f53094560a2e55e53cf8b4e88f885e7ed1b8c3af7bddb65c71" gracePeriod=600 Dec 02 13:55:45 crc kubenswrapper[4900]: I1202 13:55:45.556065 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="a09a1c5505ad87f53094560a2e55e53cf8b4e88f885e7ed1b8c3af7bddb65c71" exitCode=0 Dec 02 13:55:45 crc kubenswrapper[4900]: I1202 13:55:45.556134 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"a09a1c5505ad87f53094560a2e55e53cf8b4e88f885e7ed1b8c3af7bddb65c71"} Dec 02 13:55:45 crc kubenswrapper[4900]: I1202 13:55:45.556200 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"96fc286beb52d1fa09b32c5aa1607bec1c64d198ad304a0191d978063a0b9ab5"} Dec 02 13:55:45 crc kubenswrapper[4900]: I1202 13:55:45.556233 4900 scope.go:117] "RemoveContainer" containerID="d6a7de400caf117429d90260321e7369a106edc882963bd6b93427292ee894ba" Dec 02 13:55:45 crc kubenswrapper[4900]: I1202 13:55:45.561902 4900 generic.go:334] "Generic (PLEG): container finished" podID="ceb9bfc0-6b89-467f-b74a-678be8a2df0c" containerID="8f56550215b78821b2dc6f029e9cebab79a0e0751a6297624c3824d253e69d1d" exitCode=0 Dec 02 13:55:45 crc kubenswrapper[4900]: I1202 13:55:45.561993 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" event={"ID":"ceb9bfc0-6b89-467f-b74a-678be8a2df0c","Type":"ContainerDied","Data":"8f56550215b78821b2dc6f029e9cebab79a0e0751a6297624c3824d253e69d1d"} Dec 02 13:55:46 crc kubenswrapper[4900]: I1202 13:55:46.575292 4900 generic.go:334] "Generic (PLEG): container finished" podID="ceb9bfc0-6b89-467f-b74a-678be8a2df0c" containerID="4713836fbfe4aca0930669b671cb8d812e17a1801671e5216322ae722fb2d6fb" exitCode=0 Dec 02 13:55:46 crc kubenswrapper[4900]: I1202 13:55:46.575461 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" event={"ID":"ceb9bfc0-6b89-467f-b74a-678be8a2df0c","Type":"ContainerDied","Data":"4713836fbfe4aca0930669b671cb8d812e17a1801671e5216322ae722fb2d6fb"} Dec 02 13:55:46 crc kubenswrapper[4900]: I1202 13:55:46.579481 4900 generic.go:334] "Generic (PLEG): container finished" podID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" containerID="8a4e601cddfbe30b38cbaefd0749cf9207bca1b57d7d314e1274f1b5ad2f4e9d" exitCode=0 Dec 02 13:55:46 crc kubenswrapper[4900]: I1202 13:55:46.579568 
4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqqsq" event={"ID":"de6cbbec-5830-4dd7-ab9e-9672e973ea6f","Type":"ContainerDied","Data":"8a4e601cddfbe30b38cbaefd0749cf9207bca1b57d7d314e1274f1b5ad2f4e9d"} Dec 02 13:55:47 crc kubenswrapper[4900]: I1202 13:55:47.561213 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgkzk" Dec 02 13:55:47 crc kubenswrapper[4900]: I1202 13:55:47.590133 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqqsq" event={"ID":"de6cbbec-5830-4dd7-ab9e-9672e973ea6f","Type":"ContainerStarted","Data":"4636678d355262c35ed9fbcb427b1910197184dbfcb45969990dd2452a415ab5"} Dec 02 13:55:47 crc kubenswrapper[4900]: I1202 13:55:47.647989 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lqqsq" podStartSLOduration=1.9512317179999998 podStartE2EDuration="4.64796019s" podCreationTimestamp="2025-12-02 13:55:43 +0000 UTC" firstStartedPulling="2025-12-02 13:55:44.546003191 +0000 UTC m=+789.961817062" lastFinishedPulling="2025-12-02 13:55:47.242731653 +0000 UTC m=+792.658545534" observedRunningTime="2025-12-02 13:55:47.643350151 +0000 UTC m=+793.059164012" watchObservedRunningTime="2025-12-02 13:55:47.64796019 +0000 UTC m=+793.063774071" Dec 02 13:55:47 crc kubenswrapper[4900]: I1202 13:55:47.872146 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:47 crc kubenswrapper[4900]: I1202 13:55:47.960851 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-bundle\") pod \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " Dec 02 13:55:47 crc kubenswrapper[4900]: I1202 13:55:47.960940 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzdpz\" (UniqueName: \"kubernetes.io/projected/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-kube-api-access-wzdpz\") pod \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " Dec 02 13:55:47 crc kubenswrapper[4900]: I1202 13:55:47.961094 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-util\") pod \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\" (UID: \"ceb9bfc0-6b89-467f-b74a-678be8a2df0c\") " Dec 02 13:55:47 crc kubenswrapper[4900]: I1202 13:55:47.963356 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-bundle" (OuterVolumeSpecName: "bundle") pod "ceb9bfc0-6b89-467f-b74a-678be8a2df0c" (UID: "ceb9bfc0-6b89-467f-b74a-678be8a2df0c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:55:47 crc kubenswrapper[4900]: I1202 13:55:47.970274 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-kube-api-access-wzdpz" (OuterVolumeSpecName: "kube-api-access-wzdpz") pod "ceb9bfc0-6b89-467f-b74a-678be8a2df0c" (UID: "ceb9bfc0-6b89-467f-b74a-678be8a2df0c"). InnerVolumeSpecName "kube-api-access-wzdpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:55:47 crc kubenswrapper[4900]: I1202 13:55:47.988352 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-util" (OuterVolumeSpecName: "util") pod "ceb9bfc0-6b89-467f-b74a-678be8a2df0c" (UID: "ceb9bfc0-6b89-467f-b74a-678be8a2df0c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:55:48 crc kubenswrapper[4900]: I1202 13:55:48.064773 4900 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-util\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:48 crc kubenswrapper[4900]: I1202 13:55:48.064865 4900 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:48 crc kubenswrapper[4900]: I1202 13:55:48.064893 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzdpz\" (UniqueName: \"kubernetes.io/projected/ceb9bfc0-6b89-467f-b74a-678be8a2df0c-kube-api-access-wzdpz\") on node \"crc\" DevicePath \"\"" Dec 02 13:55:48 crc kubenswrapper[4900]: I1202 13:55:48.598989 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" Dec 02 13:55:48 crc kubenswrapper[4900]: I1202 13:55:48.598979 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl" event={"ID":"ceb9bfc0-6b89-467f-b74a-678be8a2df0c","Type":"ContainerDied","Data":"c1e6e2f37fb496b3d1f5cd7269b9907747749ac402cadb971f1a2c27c42d98c3"} Dec 02 13:55:48 crc kubenswrapper[4900]: I1202 13:55:48.599060 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1e6e2f37fb496b3d1f5cd7269b9907747749ac402cadb971f1a2c27c42d98c3" Dec 02 13:55:51 crc kubenswrapper[4900]: I1202 13:55:51.960598 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m"] Dec 02 13:55:51 crc kubenswrapper[4900]: E1202 13:55:51.961040 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb9bfc0-6b89-467f-b74a-678be8a2df0c" containerName="pull" Dec 02 13:55:51 crc kubenswrapper[4900]: I1202 13:55:51.961053 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb9bfc0-6b89-467f-b74a-678be8a2df0c" containerName="pull" Dec 02 13:55:51 crc kubenswrapper[4900]: E1202 13:55:51.961075 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb9bfc0-6b89-467f-b74a-678be8a2df0c" containerName="util" Dec 02 13:55:51 crc kubenswrapper[4900]: I1202 13:55:51.961081 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb9bfc0-6b89-467f-b74a-678be8a2df0c" containerName="util" Dec 02 13:55:51 crc kubenswrapper[4900]: E1202 13:55:51.961091 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb9bfc0-6b89-467f-b74a-678be8a2df0c" containerName="extract" Dec 02 13:55:51 crc kubenswrapper[4900]: I1202 13:55:51.961098 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb9bfc0-6b89-467f-b74a-678be8a2df0c" containerName="extract" Dec 02 13:55:51 crc kubenswrapper[4900]: I1202 13:55:51.961188 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb9bfc0-6b89-467f-b74a-678be8a2df0c" 
containerName="extract" Dec 02 13:55:51 crc kubenswrapper[4900]: I1202 13:55:51.961540 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m" Dec 02 13:55:51 crc kubenswrapper[4900]: I1202 13:55:51.965302 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 02 13:55:51 crc kubenswrapper[4900]: I1202 13:55:51.965380 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-5tz46" Dec 02 13:55:51 crc kubenswrapper[4900]: I1202 13:55:51.966011 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 02 13:55:51 crc kubenswrapper[4900]: I1202 13:55:51.974933 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m"] Dec 02 13:55:52 crc kubenswrapper[4900]: I1202 13:55:52.025533 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt5zv\" (UniqueName: \"kubernetes.io/projected/7c6f8aa4-bdd4-4050-9af6-a5e2bef44e66-kube-api-access-qt5zv\") pod \"nmstate-operator-5b5b58f5c8-2jf4m\" (UID: \"7c6f8aa4-bdd4-4050-9af6-a5e2bef44e66\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m" Dec 02 13:55:52 crc kubenswrapper[4900]: I1202 13:55:52.127180 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt5zv\" (UniqueName: \"kubernetes.io/projected/7c6f8aa4-bdd4-4050-9af6-a5e2bef44e66-kube-api-access-qt5zv\") pod \"nmstate-operator-5b5b58f5c8-2jf4m\" (UID: \"7c6f8aa4-bdd4-4050-9af6-a5e2bef44e66\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m" Dec 02 13:55:52 crc kubenswrapper[4900]: I1202 13:55:52.153612 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt5zv\" (UniqueName: \"kubernetes.io/projected/7c6f8aa4-bdd4-4050-9af6-a5e2bef44e66-kube-api-access-qt5zv\") pod \"nmstate-operator-5b5b58f5c8-2jf4m\" (UID: \"7c6f8aa4-bdd4-4050-9af6-a5e2bef44e66\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m" Dec 02 13:55:52 crc kubenswrapper[4900]: I1202 13:55:52.279057 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m" Dec 02 13:55:52 crc kubenswrapper[4900]: I1202 13:55:52.605892 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m"] Dec 02 13:55:52 crc kubenswrapper[4900]: I1202 13:55:52.629220 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m" event={"ID":"7c6f8aa4-bdd4-4050-9af6-a5e2bef44e66","Type":"ContainerStarted","Data":"c438e70be468e7f876e8174dfbaaf8c7981c3566bb6bcab446a187f3673f5467"} Dec 02 13:55:53 crc kubenswrapper[4900]: I1202 13:55:53.866381 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:53 crc kubenswrapper[4900]: I1202 13:55:53.867941 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:55:54 crc kubenswrapper[4900]: I1202 13:55:54.930659 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lqqsq" podUID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" containerName="registry-server" probeResult="failure" output=< Dec 02 13:55:54 crc kubenswrapper[4900]: timeout: failed to connect service ":50051" within 1s Dec 02 13:55:54 crc kubenswrapper[4900]: > Dec 02 13:55:57 crc kubenswrapper[4900]: I1202 13:55:57.670360 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m" event={"ID":"7c6f8aa4-bdd4-4050-9af6-a5e2bef44e66","Type":"ContainerStarted","Data":"d55242b7d11a0fde6968d9ab043e812915e362ea1e24e4d85e5432b5df19534a"} Dec 02 13:55:57 crc kubenswrapper[4900]: I1202 13:55:57.703301 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m" podStartSLOduration=2.884267767 podStartE2EDuration="6.703272546s" podCreationTimestamp="2025-12-02 13:55:51 +0000 UTC" firstStartedPulling="2025-12-02 13:55:52.616009312 +0000 UTC m=+798.031823163" lastFinishedPulling="2025-12-02 13:55:56.435014051 +0000 UTC m=+801.850827942" observedRunningTime="2025-12-02 13:55:57.69946472 +0000 UTC m=+803.115278601" watchObservedRunningTime="2025-12-02 13:55:57.703272546 +0000 UTC m=+803.119086427" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.205837 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc"] Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.210127 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6"] Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.210311 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.211115 4900 util.go:30] "No sandbox for pod can be found. 
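[Note] The startup-probe failure above means the registry-server container in redhat-operators-lqqsq was not accepting connections on :50051 within the probe's 1s budget. A reduced stand-in using a plain TCP dial with the same timeout (the actual probe speaks the gRPC health-checking protocol against that port, not raw TCP):

```go
// Sketch: connect-with-timeout check against the registry-server port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:50051", time.Second)
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s (%v)\n", ":50051", err)
		return
	}
	conn.Close()
	fmt.Println("registry-server is accepting connections")
}
```

Startup probes exist precisely for slow-starting servers like this catalog image; the kubelet keeps probing instead of restarting the container while the startup window is open.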
Dec 02 13:55:57 crc kubenswrapper[4900]: I1202 13:55:57.670360 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m" event={"ID":"7c6f8aa4-bdd4-4050-9af6-a5e2bef44e66","Type":"ContainerStarted","Data":"d55242b7d11a0fde6968d9ab043e812915e362ea1e24e4d85e5432b5df19534a"}
Dec 02 13:55:57 crc kubenswrapper[4900]: I1202 13:55:57.703301 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-2jf4m" podStartSLOduration=2.884267767 podStartE2EDuration="6.703272546s" podCreationTimestamp="2025-12-02 13:55:51 +0000 UTC" firstStartedPulling="2025-12-02 13:55:52.616009312 +0000 UTC m=+798.031823163" lastFinishedPulling="2025-12-02 13:55:56.435014051 +0000 UTC m=+801.850827942" observedRunningTime="2025-12-02 13:55:57.69946472 +0000 UTC m=+803.115278601" watchObservedRunningTime="2025-12-02 13:55:57.703272546 +0000 UTC m=+803.119086427"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.205837 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc"]
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.210127 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6"]
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.210311 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.211115 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.212777 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-dx8j7"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.214253 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.241698 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc"]
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.249111 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6"]
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.260696 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-cvpcz"]
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.262187 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-cvpcz"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.302195 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4aad5874-85e1-463d-aa8b-7736a7f36be6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-rgps6\" (UID: \"4aad5874-85e1-463d-aa8b-7736a7f36be6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.302731 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m82ms\" (UniqueName: \"kubernetes.io/projected/4aad5874-85e1-463d-aa8b-7736a7f36be6-kube-api-access-m82ms\") pod \"nmstate-webhook-5f6d4c5ccb-rgps6\" (UID: \"4aad5874-85e1-463d-aa8b-7736a7f36be6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.302888 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjhtg\" (UniqueName: \"kubernetes.io/projected/8382e72b-9452-45c7-92bd-dbdf8cca9706-kube-api-access-jjhtg\") pod \"nmstate-metrics-7f946cbc9-dw7xc\" (UID: \"8382e72b-9452-45c7-92bd-dbdf8cca9706\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.344440 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72"]
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.345435 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.350759 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.350786 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zg6pl"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.350817 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.367418 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72"]
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.403838 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m82ms\" (UniqueName: \"kubernetes.io/projected/4aad5874-85e1-463d-aa8b-7736a7f36be6-kube-api-access-m82ms\") pod \"nmstate-webhook-5f6d4c5ccb-rgps6\" (UID: \"4aad5874-85e1-463d-aa8b-7736a7f36be6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.403898 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5578f939-25d2-48da-8999-c26293a16f46-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-95z72\" (UID: \"5578f939-25d2-48da-8999-c26293a16f46\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.403939 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjhtg\" (UniqueName: \"kubernetes.io/projected/8382e72b-9452-45c7-92bd-dbdf8cca9706-kube-api-access-jjhtg\") pod \"nmstate-metrics-7f946cbc9-dw7xc\" (UID: \"8382e72b-9452-45c7-92bd-dbdf8cca9706\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.403968 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4aad5874-85e1-463d-aa8b-7736a7f36be6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-rgps6\" (UID: \"4aad5874-85e1-463d-aa8b-7736a7f36be6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.403991 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5578f939-25d2-48da-8999-c26293a16f46-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-95z72\" (UID: \"5578f939-25d2-48da-8999-c26293a16f46\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.404012 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/150c147d-317b-48e4-a057-da44c031d144-nmstate-lock\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.404039 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbkls\" (UniqueName: \"kubernetes.io/projected/150c147d-317b-48e4-a057-da44c031d144-kube-api-access-wbkls\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.404058 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8rm8\" (UniqueName: \"kubernetes.io/projected/5578f939-25d2-48da-8999-c26293a16f46-kube-api-access-m8rm8\") pod \"nmstate-console-plugin-7fbb5f6569-95z72\" (UID: \"5578f939-25d2-48da-8999-c26293a16f46\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.404089 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/150c147d-317b-48e4-a057-da44c031d144-ovs-socket\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.404244 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/150c147d-317b-48e4-a057-da44c031d144-dbus-socket\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.410713 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4aad5874-85e1-463d-aa8b-7736a7f36be6-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-rgps6\" (UID: \"4aad5874-85e1-463d-aa8b-7736a7f36be6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.419836 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjhtg\" (UniqueName: \"kubernetes.io/projected/8382e72b-9452-45c7-92bd-dbdf8cca9706-kube-api-access-jjhtg\") pod \"nmstate-metrics-7f946cbc9-dw7xc\" (UID: \"8382e72b-9452-45c7-92bd-dbdf8cca9706\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.420215 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m82ms\" (UniqueName: \"kubernetes.io/projected/4aad5874-85e1-463d-aa8b-7736a7f36be6-kube-api-access-m82ms\") pod \"nmstate-webhook-5f6d4c5ccb-rgps6\" (UID: \"4aad5874-85e1-463d-aa8b-7736a7f36be6\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.505240 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5578f939-25d2-48da-8999-c26293a16f46-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-95z72\" (UID: \"5578f939-25d2-48da-8999-c26293a16f46\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.505301 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5578f939-25d2-48da-8999-c26293a16f46-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-95z72\" (UID: \"5578f939-25d2-48da-8999-c26293a16f46\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.505326 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/150c147d-317b-48e4-a057-da44c031d144-nmstate-lock\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.505352 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbkls\" (UniqueName: \"kubernetes.io/projected/150c147d-317b-48e4-a057-da44c031d144-kube-api-access-wbkls\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.505371 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8rm8\" (UniqueName: \"kubernetes.io/projected/5578f939-25d2-48da-8999-c26293a16f46-kube-api-access-m8rm8\") pod \"nmstate-console-plugin-7fbb5f6569-95z72\" (UID: \"5578f939-25d2-48da-8999-c26293a16f46\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.505391 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/150c147d-317b-48e4-a057-da44c031d144-ovs-socket\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.505412 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/150c147d-317b-48e4-a057-da44c031d144-dbus-socket\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz"
Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.510770 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/150c147d-317b-48e4-a057-da44c031d144-nmstate-lock\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz"
Dec 02 13:56:02 crc kubenswrapper[4900]: E1202 13:56:02.510907 4900 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Dec 02 13:56:02 crc kubenswrapper[4900]: E1202 13:56:02.510996 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5578f939-25d2-48da-8999-c26293a16f46-plugin-serving-cert podName:5578f939-25d2-48da-8999-c26293a16f46 nodeName:}" failed. No retries permitted until 2025-12-02 13:56:03.010976571 +0000 UTC m=+808.426790422 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/5578f939-25d2-48da-8999-c26293a16f46-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-95z72" (UID: "5578f939-25d2-48da-8999-c26293a16f46") : secret "plugin-serving-cert" not found
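[Note] The two errors above show the console-plugin pod racing its own serving certificate: the plugin-serving-cert secret is mounted before the service CA controller has minted it, so the mount fails and the volume manager parks the operation, retrying after a delay (durationBeforeRetry 500ms here) that grows on repeated failures. A compact sketch of that retry-with-backoff pattern (constants and structure are illustrative, not the kubelet's exact policy):

```go
// Sketch: retry a failing mount with a doubling backoff, capped at max.
package main

import (
	"errors"
	"fmt"
	"time"
)

func mountWithBackoff(mount func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := mount(); err == nil {
			return nil
		} else {
			fmt.Printf("mount failed (%v); no retries permitted for %s\n", err, delay)
		}
		time.Sleep(delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
	return errors.New("giving up after repeated mount failures")
}

func main() {
	calls := 0
	secretMissing := func() error {
		calls++
		if calls < 3 {
			return errors.New(`secret "plugin-serving-cert" not found`)
		}
		return nil // pretend the secret was created on the third try
	}
	if err := mountWithBackoff(secretMissing, 500*time.Millisecond, 2*time.Second, 5); err != nil {
		fmt.Println(err)
	}
}
```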
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/5578f939-25d2-48da-8999-c26293a16f46-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-95z72" (UID: "5578f939-25d2-48da-8999-c26293a16f46") : secret "plugin-serving-cert" not found Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.511136 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/150c147d-317b-48e4-a057-da44c031d144-dbus-socket\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.523765 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/150c147d-317b-48e4-a057-da44c031d144-ovs-socket\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.526839 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5578f939-25d2-48da-8999-c26293a16f46-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-95z72\" (UID: \"5578f939-25d2-48da-8999-c26293a16f46\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.538014 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.542034 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8rm8\" (UniqueName: \"kubernetes.io/projected/5578f939-25d2-48da-8999-c26293a16f46-kube-api-access-m8rm8\") pod \"nmstate-console-plugin-7fbb5f6569-95z72\" (UID: \"5578f939-25d2-48da-8999-c26293a16f46\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.557671 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.570434 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbkls\" (UniqueName: \"kubernetes.io/projected/150c147d-317b-48e4-a057-da44c031d144-kube-api-access-wbkls\") pod \"nmstate-handler-cvpcz\" (UID: \"150c147d-317b-48e4-a057-da44c031d144\") " pod="openshift-nmstate/nmstate-handler-cvpcz" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.589953 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-cvpcz" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.641214 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-74f65588b4-7rl4c"] Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.642496 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.657201 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74f65588b4-7rl4c"] Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.712412 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmw8b\" (UniqueName: \"kubernetes.io/projected/18ed9ec3-881a-48a5-8849-98c93ec38f54-kube-api-access-vmw8b\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.712453 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18ed9ec3-881a-48a5-8849-98c93ec38f54-console-oauth-config\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.712485 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-service-ca\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.712567 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-trusted-ca-bundle\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.712587 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-oauth-serving-cert\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.712616 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18ed9ec3-881a-48a5-8849-98c93ec38f54-console-serving-cert\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.712674 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-console-config\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.714469 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cvpcz" event={"ID":"150c147d-317b-48e4-a057-da44c031d144","Type":"ContainerStarted","Data":"b34c34257d5e0adfc528e1c72f7bc90aed4223e8e2f3ff296108e87cf06b3eb8"} Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.814354 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-trusted-ca-bundle\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.814398 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-oauth-serving-cert\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.814422 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18ed9ec3-881a-48a5-8849-98c93ec38f54-console-serving-cert\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.814454 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-console-config\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.814475 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmw8b\" (UniqueName: \"kubernetes.io/projected/18ed9ec3-881a-48a5-8849-98c93ec38f54-kube-api-access-vmw8b\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.814490 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18ed9ec3-881a-48a5-8849-98c93ec38f54-console-oauth-config\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.814511 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-service-ca\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.816150 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-oauth-serving-cert\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.816562 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-service-ca\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.816713 4900 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-console-config\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.816899 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18ed9ec3-881a-48a5-8849-98c93ec38f54-trusted-ca-bundle\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.825390 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18ed9ec3-881a-48a5-8849-98c93ec38f54-console-serving-cert\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.825796 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18ed9ec3-881a-48a5-8849-98c93ec38f54-console-oauth-config\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.832876 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmw8b\" (UniqueName: \"kubernetes.io/projected/18ed9ec3-881a-48a5-8849-98c93ec38f54-kube-api-access-vmw8b\") pod \"console-74f65588b4-7rl4c\" (UID: \"18ed9ec3-881a-48a5-8849-98c93ec38f54\") " pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:02 crc kubenswrapper[4900]: W1202 13:56:02.837673 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aad5874_85e1_463d_aa8b_7736a7f36be6.slice/crio-635644016d24f438ce38a045ded088ef62817e1abd5b1e15d5fd8d01a1a43073 WatchSource:0}: Error finding container 635644016d24f438ce38a045ded088ef62817e1abd5b1e15d5fd8d01a1a43073: Status 404 returned error can't find the container with id 635644016d24f438ce38a045ded088ef62817e1abd5b1e15d5fd8d01a1a43073 Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.837919 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6"] Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.875266 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc"] Dec 02 13:56:02 crc kubenswrapper[4900]: W1202 13:56:02.882860 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8382e72b_9452_45c7_92bd_dbdf8cca9706.slice/crio-3268a08af2b65a1874aa6878b008cb607f7f246573a7d528a7c3474b641712c3 WatchSource:0}: Error finding container 3268a08af2b65a1874aa6878b008cb607f7f246573a7d528a7c3474b641712c3: Status 404 returned error can't find the container with id 3268a08af2b65a1874aa6878b008cb607f7f246573a7d528a7c3474b641712c3 Dec 02 13:56:02 crc kubenswrapper[4900]: I1202 13:56:02.974953 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:03 crc kubenswrapper[4900]: I1202 13:56:03.016658 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5578f939-25d2-48da-8999-c26293a16f46-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-95z72\" (UID: \"5578f939-25d2-48da-8999-c26293a16f46\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72" Dec 02 13:56:03 crc kubenswrapper[4900]: I1202 13:56:03.020794 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5578f939-25d2-48da-8999-c26293a16f46-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-95z72\" (UID: \"5578f939-25d2-48da-8999-c26293a16f46\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72" Dec 02 13:56:03 crc kubenswrapper[4900]: I1202 13:56:03.260898 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72" Dec 02 13:56:03 crc kubenswrapper[4900]: I1202 13:56:03.495712 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74f65588b4-7rl4c"] Dec 02 13:56:03 crc kubenswrapper[4900]: W1202 13:56:03.510890 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18ed9ec3_881a_48a5_8849_98c93ec38f54.slice/crio-752825b846e69fec1490139fe4f78bf6c244ca33895f293b5465866935fff26d WatchSource:0}: Error finding container 752825b846e69fec1490139fe4f78bf6c244ca33895f293b5465866935fff26d: Status 404 returned error can't find the container with id 752825b846e69fec1490139fe4f78bf6c244ca33895f293b5465866935fff26d Dec 02 13:56:03 crc kubenswrapper[4900]: I1202 13:56:03.548576 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72"] Dec 02 13:56:03 crc kubenswrapper[4900]: W1202 13:56:03.566624 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5578f939_25d2_48da_8999_c26293a16f46.slice/crio-18fc5280806dc9c93012bf734f2fc4e0674ae1d86821ebeffe5cb20a7b62d185 WatchSource:0}: Error finding container 18fc5280806dc9c93012bf734f2fc4e0674ae1d86821ebeffe5cb20a7b62d185: Status 404 returned error can't find the container with id 18fc5280806dc9c93012bf734f2fc4e0674ae1d86821ebeffe5cb20a7b62d185 Dec 02 13:56:03 crc kubenswrapper[4900]: I1202 13:56:03.726663 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6" event={"ID":"4aad5874-85e1-463d-aa8b-7736a7f36be6","Type":"ContainerStarted","Data":"635644016d24f438ce38a045ded088ef62817e1abd5b1e15d5fd8d01a1a43073"} Dec 02 13:56:03 crc kubenswrapper[4900]: I1202 13:56:03.728423 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72" event={"ID":"5578f939-25d2-48da-8999-c26293a16f46","Type":"ContainerStarted","Data":"18fc5280806dc9c93012bf734f2fc4e0674ae1d86821ebeffe5cb20a7b62d185"} Dec 02 13:56:03 crc kubenswrapper[4900]: I1202 13:56:03.729862 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74f65588b4-7rl4c" event={"ID":"18ed9ec3-881a-48a5-8849-98c93ec38f54","Type":"ContainerStarted","Data":"752825b846e69fec1490139fe4f78bf6c244ca33895f293b5465866935fff26d"} Dec 02 13:56:03 crc 
kubenswrapper[4900]: I1202 13:56:03.731439 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc" event={"ID":"8382e72b-9452-45c7-92bd-dbdf8cca9706","Type":"ContainerStarted","Data":"3268a08af2b65a1874aa6878b008cb607f7f246573a7d528a7c3474b641712c3"} Dec 02 13:56:03 crc kubenswrapper[4900]: I1202 13:56:03.950017 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:56:04 crc kubenswrapper[4900]: I1202 13:56:04.027664 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:56:04 crc kubenswrapper[4900]: I1202 13:56:04.745050 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74f65588b4-7rl4c" event={"ID":"18ed9ec3-881a-48a5-8849-98c93ec38f54","Type":"ContainerStarted","Data":"8669728904cbf6cac0c4512b5a46dfa06383421596aef467716d24c75cf38542"} Dec 02 13:56:04 crc kubenswrapper[4900]: I1202 13:56:04.774231 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-74f65588b4-7rl4c" podStartSLOduration=2.774203558 podStartE2EDuration="2.774203558s" podCreationTimestamp="2025-12-02 13:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:56:04.772429739 +0000 UTC m=+810.188243630" watchObservedRunningTime="2025-12-02 13:56:04.774203558 +0000 UTC m=+810.190017409" Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.321176 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lqqsq"] Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.321549 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lqqsq" podUID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" containerName="registry-server" containerID="cri-o://4636678d355262c35ed9fbcb427b1910197184dbfcb45969990dd2452a415ab5" gracePeriod=2 Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.764311 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72" event={"ID":"5578f939-25d2-48da-8999-c26293a16f46","Type":"ContainerStarted","Data":"9964fed034cd0bc84094ad01dde6e786267e35c81c8c2132648d908b02c631e5"} Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.768758 4900 generic.go:334] "Generic (PLEG): container finished" podID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" containerID="4636678d355262c35ed9fbcb427b1910197184dbfcb45969990dd2452a415ab5" exitCode=0 Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.768811 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqqsq" event={"ID":"de6cbbec-5830-4dd7-ab9e-9672e973ea6f","Type":"ContainerDied","Data":"4636678d355262c35ed9fbcb427b1910197184dbfcb45969990dd2452a415ab5"} Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.775195 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc" event={"ID":"8382e72b-9452-45c7-92bd-dbdf8cca9706","Type":"ContainerStarted","Data":"cd938bfcf7f226631a64feabeb1ba01b9817e693165a4a39810db5e0c7d38daf"} Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.778091 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6" 
event={"ID":"4aad5874-85e1-463d-aa8b-7736a7f36be6","Type":"ContainerStarted","Data":"22b1d070b244754acf64e897bf3637b90e0357327729e47a0d8dcc39141d9064"} Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.778885 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6" Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.784208 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-95z72" podStartSLOduration=2.023520719 podStartE2EDuration="4.784192641s" podCreationTimestamp="2025-12-02 13:56:02 +0000 UTC" firstStartedPulling="2025-12-02 13:56:03.569795002 +0000 UTC m=+808.985608863" lastFinishedPulling="2025-12-02 13:56:06.330466894 +0000 UTC m=+811.746280785" observedRunningTime="2025-12-02 13:56:06.783414429 +0000 UTC m=+812.199228320" watchObservedRunningTime="2025-12-02 13:56:06.784192641 +0000 UTC m=+812.200006492" Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.795806 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.805926 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6" podStartSLOduration=1.269556231 podStartE2EDuration="4.805916196s" podCreationTimestamp="2025-12-02 13:56:02 +0000 UTC" firstStartedPulling="2025-12-02 13:56:02.840760147 +0000 UTC m=+808.256573998" lastFinishedPulling="2025-12-02 13:56:06.377120102 +0000 UTC m=+811.792933963" observedRunningTime="2025-12-02 13:56:06.799879078 +0000 UTC m=+812.215692919" watchObservedRunningTime="2025-12-02 13:56:06.805916196 +0000 UTC m=+812.221730047" Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.884065 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-catalog-content\") pod \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\" (UID: \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.884129 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-utilities\") pod \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\" (UID: \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.884208 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nptfh\" (UniqueName: \"kubernetes.io/projected/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-kube-api-access-nptfh\") pod \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\" (UID: \"de6cbbec-5830-4dd7-ab9e-9672e973ea6f\") " Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.885463 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-utilities" (OuterVolumeSpecName: "utilities") pod "de6cbbec-5830-4dd7-ab9e-9672e973ea6f" (UID: "de6cbbec-5830-4dd7-ab9e-9672e973ea6f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.889565 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-kube-api-access-nptfh" (OuterVolumeSpecName: "kube-api-access-nptfh") pod "de6cbbec-5830-4dd7-ab9e-9672e973ea6f" (UID: "de6cbbec-5830-4dd7-ab9e-9672e973ea6f"). InnerVolumeSpecName "kube-api-access-nptfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.986974 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:06 crc kubenswrapper[4900]: I1202 13:56:06.987385 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nptfh\" (UniqueName: \"kubernetes.io/projected/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-kube-api-access-nptfh\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.011055 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de6cbbec-5830-4dd7-ab9e-9672e973ea6f" (UID: "de6cbbec-5830-4dd7-ab9e-9672e973ea6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.089123 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6cbbec-5830-4dd7-ab9e-9672e973ea6f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.790230 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cvpcz" event={"ID":"150c147d-317b-48e4-a057-da44c031d144","Type":"ContainerStarted","Data":"44afbb801eee84b0b409eb93e220ba2ba716f5a004f34b5116858450d0edbbc8"} Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.792150 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-cvpcz" Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.796788 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqqsq" event={"ID":"de6cbbec-5830-4dd7-ab9e-9672e973ea6f","Type":"ContainerDied","Data":"846e17aa00c667bc27d9cd59770f13d5aadf6925c1934ba92a8a71b77ae77c28"} Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.797246 4900 scope.go:117] "RemoveContainer" containerID="4636678d355262c35ed9fbcb427b1910197184dbfcb45969990dd2452a415ab5" Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.797075 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lqqsq" Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.813999 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-cvpcz" podStartSLOduration=2.089081825 podStartE2EDuration="5.813975043s" podCreationTimestamp="2025-12-02 13:56:02 +0000 UTC" firstStartedPulling="2025-12-02 13:56:02.683171867 +0000 UTC m=+808.098985718" lastFinishedPulling="2025-12-02 13:56:06.408065045 +0000 UTC m=+811.823878936" observedRunningTime="2025-12-02 13:56:07.812382529 +0000 UTC m=+813.228196410" watchObservedRunningTime="2025-12-02 13:56:07.813975043 +0000 UTC m=+813.229788924" Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.848522 4900 scope.go:117] "RemoveContainer" containerID="8a4e601cddfbe30b38cbaefd0749cf9207bca1b57d7d314e1274f1b5ad2f4e9d" Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.848798 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lqqsq"] Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.853695 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lqqsq"] Dec 02 13:56:07 crc kubenswrapper[4900]: I1202 13:56:07.876975 4900 scope.go:117] "RemoveContainer" containerID="bc45dfdeb92e6cc830e907f5805cdd60a8209a09a7d691768a958e1f5118e316" Dec 02 13:56:08 crc kubenswrapper[4900]: I1202 13:56:08.923706 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" path="/var/lib/kubelet/pods/de6cbbec-5830-4dd7-ab9e-9672e973ea6f/volumes" Dec 02 13:56:10 crc kubenswrapper[4900]: I1202 13:56:10.826230 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc" event={"ID":"8382e72b-9452-45c7-92bd-dbdf8cca9706","Type":"ContainerStarted","Data":"236c407dea616cb6a91a4740e9ccb5f5a15c6a7c434dcce044f955f680510b03"} Dec 02 13:56:10 crc kubenswrapper[4900]: I1202 13:56:10.853809 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-dw7xc" podStartSLOduration=1.523828552 podStartE2EDuration="8.853784218s" podCreationTimestamp="2025-12-02 13:56:02 +0000 UTC" firstStartedPulling="2025-12-02 13:56:02.884946347 +0000 UTC m=+808.300760198" lastFinishedPulling="2025-12-02 13:56:10.214901983 +0000 UTC m=+815.630715864" observedRunningTime="2025-12-02 13:56:10.85314361 +0000 UTC m=+816.268957521" watchObservedRunningTime="2025-12-02 13:56:10.853784218 +0000 UTC m=+816.269598109" Dec 02 13:56:12 crc kubenswrapper[4900]: I1202 13:56:12.632722 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-cvpcz" Dec 02 13:56:12 crc kubenswrapper[4900]: I1202 13:56:12.975926 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:12 crc kubenswrapper[4900]: I1202 13:56:12.975999 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:12 crc kubenswrapper[4900]: I1202 13:56:12.983445 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:13 crc kubenswrapper[4900]: I1202 13:56:13.857931 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74f65588b4-7rl4c" Dec 02 13:56:13 crc 
kubenswrapper[4900]: I1202 13:56:13.947236 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vrdh8"] Dec 02 13:56:22 crc kubenswrapper[4900]: I1202 13:56:22.566792 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rgps6" Dec 02 13:56:38 crc kubenswrapper[4900]: I1202 13:56:38.993211 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-vrdh8" podUID="95616fe1-4979-433d-afce-3235d5dab8a5" containerName="console" containerID="cri-o://d43e1770098954ec58f10770eaaddc859990a053bea0077fc33c91a8f2c38e12" gracePeriod=15 Dec 02 13:56:39 crc kubenswrapper[4900]: I1202 13:56:39.259764 4900 patch_prober.go:28] interesting pod/console-f9d7485db-vrdh8 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Dec 02 13:56:39 crc kubenswrapper[4900]: I1202 13:56:39.259859 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-vrdh8" podUID="95616fe1-4979-433d-afce-3235d5dab8a5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.066139 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vrdh8_95616fe1-4979-433d-afce-3235d5dab8a5/console/0.log" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.066798 4900 generic.go:334] "Generic (PLEG): container finished" podID="95616fe1-4979-433d-afce-3235d5dab8a5" containerID="d43e1770098954ec58f10770eaaddc859990a053bea0077fc33c91a8f2c38e12" exitCode=2 Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.066847 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vrdh8" event={"ID":"95616fe1-4979-433d-afce-3235d5dab8a5","Type":"ContainerDied","Data":"d43e1770098954ec58f10770eaaddc859990a053bea0077fc33c91a8f2c38e12"} Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.121165 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8"] Dec 02 13:56:40 crc kubenswrapper[4900]: E1202 13:56:40.121509 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" containerName="extract-content" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.121532 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" containerName="extract-content" Dec 02 13:56:40 crc kubenswrapper[4900]: E1202 13:56:40.121568 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" containerName="registry-server" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.121583 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" containerName="registry-server" Dec 02 13:56:40 crc kubenswrapper[4900]: E1202 13:56:40.121600 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" containerName="extract-utilities" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.121614 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" 
containerName="extract-utilities" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.121845 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="de6cbbec-5830-4dd7-ab9e-9672e973ea6f" containerName="registry-server" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.123309 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.126376 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.135342 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8"] Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.240296 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.240565 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxpsn\" (UniqueName: \"kubernetes.io/projected/8d928ba2-b289-4ede-96e5-a136771b99b1-kube-api-access-jxpsn\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.240923 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.342324 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.342394 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.342433 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxpsn\" (UniqueName: \"kubernetes.io/projected/8d928ba2-b289-4ede-96e5-a136771b99b1-kube-api-access-jxpsn\") pod 
\"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.343313 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.343321 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.371096 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxpsn\" (UniqueName: \"kubernetes.io/projected/8d928ba2-b289-4ede-96e5-a136771b99b1-kube-api-access-jxpsn\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.407107 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vrdh8_95616fe1-4979-433d-afce-3235d5dab8a5/console/0.log" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.407220 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.443123 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-console-config\") pod \"95616fe1-4979-433d-afce-3235d5dab8a5\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.443263 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-serving-cert\") pod \"95616fe1-4979-433d-afce-3235d5dab8a5\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.443308 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc8qv\" (UniqueName: \"kubernetes.io/projected/95616fe1-4979-433d-afce-3235d5dab8a5-kube-api-access-vc8qv\") pod \"95616fe1-4979-433d-afce-3235d5dab8a5\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.443353 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-service-ca\") pod \"95616fe1-4979-433d-afce-3235d5dab8a5\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.443397 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-oauth-config\") pod \"95616fe1-4979-433d-afce-3235d5dab8a5\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.443454 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-oauth-serving-cert\") pod \"95616fe1-4979-433d-afce-3235d5dab8a5\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.443523 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-trusted-ca-bundle\") pod \"95616fe1-4979-433d-afce-3235d5dab8a5\" (UID: \"95616fe1-4979-433d-afce-3235d5dab8a5\") " Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.446546 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "95616fe1-4979-433d-afce-3235d5dab8a5" (UID: "95616fe1-4979-433d-afce-3235d5dab8a5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.451226 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-service-ca" (OuterVolumeSpecName: "service-ca") pod "95616fe1-4979-433d-afce-3235d5dab8a5" (UID: "95616fe1-4979-433d-afce-3235d5dab8a5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.452389 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "95616fe1-4979-433d-afce-3235d5dab8a5" (UID: "95616fe1-4979-433d-afce-3235d5dab8a5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.453579 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-console-config" (OuterVolumeSpecName: "console-config") pod "95616fe1-4979-433d-afce-3235d5dab8a5" (UID: "95616fe1-4979-433d-afce-3235d5dab8a5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.457401 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.489170 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "95616fe1-4979-433d-afce-3235d5dab8a5" (UID: "95616fe1-4979-433d-afce-3235d5dab8a5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.489468 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95616fe1-4979-433d-afce-3235d5dab8a5-kube-api-access-vc8qv" (OuterVolumeSpecName: "kube-api-access-vc8qv") pod "95616fe1-4979-433d-afce-3235d5dab8a5" (UID: "95616fe1-4979-433d-afce-3235d5dab8a5"). InnerVolumeSpecName "kube-api-access-vc8qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.491239 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "95616fe1-4979-433d-afce-3235d5dab8a5" (UID: "95616fe1-4979-433d-afce-3235d5dab8a5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.545302 4900 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.545334 4900 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.545347 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc8qv\" (UniqueName: \"kubernetes.io/projected/95616fe1-4979-433d-afce-3235d5dab8a5-kube-api-access-vc8qv\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.545355 4900 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/95616fe1-4979-433d-afce-3235d5dab8a5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.545364 4900 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.545372 4900 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.545380 4900 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/95616fe1-4979-433d-afce-3235d5dab8a5-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:40 crc kubenswrapper[4900]: I1202 13:56:40.901672 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8"] Dec 02 13:56:41 crc kubenswrapper[4900]: I1202 13:56:41.079252 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vrdh8_95616fe1-4979-433d-afce-3235d5dab8a5/console/0.log" Dec 02 13:56:41 crc kubenswrapper[4900]: I1202 13:56:41.079976 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vrdh8" Dec 02 13:56:41 crc kubenswrapper[4900]: I1202 13:56:41.079997 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vrdh8" event={"ID":"95616fe1-4979-433d-afce-3235d5dab8a5","Type":"ContainerDied","Data":"991da6f2d96e49dfd3268b02811be54191698ccecea282287b381e77bfb0e7b2"} Dec 02 13:56:41 crc kubenswrapper[4900]: I1202 13:56:41.080071 4900 scope.go:117] "RemoveContainer" containerID="d43e1770098954ec58f10770eaaddc859990a053bea0077fc33c91a8f2c38e12" Dec 02 13:56:41 crc kubenswrapper[4900]: I1202 13:56:41.082366 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" event={"ID":"8d928ba2-b289-4ede-96e5-a136771b99b1","Type":"ContainerStarted","Data":"8ceda9549ecbfb44f5d5c496d4cb528fe5335bd47ce7147aaf5c05ab356cf7a2"} Dec 02 13:56:41 crc kubenswrapper[4900]: I1202 13:56:41.106523 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vrdh8"] Dec 02 13:56:41 crc kubenswrapper[4900]: I1202 13:56:41.111736 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-vrdh8"] Dec 02 13:56:42 crc kubenswrapper[4900]: I1202 13:56:42.093343 4900 generic.go:334] "Generic (PLEG): container finished" podID="8d928ba2-b289-4ede-96e5-a136771b99b1" containerID="c3b72130b17f94f406ad777e56cbc5553fa44257f8011c686704a9d9885e4190" exitCode=0 Dec 02 13:56:42 crc kubenswrapper[4900]: I1202 13:56:42.093501 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" event={"ID":"8d928ba2-b289-4ede-96e5-a136771b99b1","Type":"ContainerDied","Data":"c3b72130b17f94f406ad777e56cbc5553fa44257f8011c686704a9d9885e4190"} Dec 02 13:56:42 crc kubenswrapper[4900]: I1202 13:56:42.922810 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95616fe1-4979-433d-afce-3235d5dab8a5" path="/var/lib/kubelet/pods/95616fe1-4979-433d-afce-3235d5dab8a5/volumes" Dec 02 13:56:44 crc kubenswrapper[4900]: I1202 13:56:44.119857 4900 generic.go:334] "Generic (PLEG): container finished" podID="8d928ba2-b289-4ede-96e5-a136771b99b1" containerID="9ad6b163962690ca38cd8b2c043590c3b4f60f9c271379e76ba6bec8920a97da" exitCode=0 Dec 02 13:56:44 crc kubenswrapper[4900]: I1202 13:56:44.119993 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" event={"ID":"8d928ba2-b289-4ede-96e5-a136771b99b1","Type":"ContainerDied","Data":"9ad6b163962690ca38cd8b2c043590c3b4f60f9c271379e76ba6bec8920a97da"} Dec 02 13:56:45 crc kubenswrapper[4900]: I1202 13:56:45.132870 4900 generic.go:334] "Generic (PLEG): container finished" podID="8d928ba2-b289-4ede-96e5-a136771b99b1" containerID="79b4bc7c38de3749c939f57401b9668bb29405ca629968644d5e9b2fee9580b9" exitCode=0 Dec 02 13:56:45 crc kubenswrapper[4900]: I1202 13:56:45.132951 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" event={"ID":"8d928ba2-b289-4ede-96e5-a136771b99b1","Type":"ContainerDied","Data":"79b4bc7c38de3749c939f57401b9668bb29405ca629968644d5e9b2fee9580b9"} Dec 02 13:56:46 crc kubenswrapper[4900]: I1202 13:56:46.446707 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:46 crc kubenswrapper[4900]: I1202 13:56:46.547979 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxpsn\" (UniqueName: \"kubernetes.io/projected/8d928ba2-b289-4ede-96e5-a136771b99b1-kube-api-access-jxpsn\") pod \"8d928ba2-b289-4ede-96e5-a136771b99b1\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " Dec 02 13:56:46 crc kubenswrapper[4900]: I1202 13:56:46.557206 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d928ba2-b289-4ede-96e5-a136771b99b1-kube-api-access-jxpsn" (OuterVolumeSpecName: "kube-api-access-jxpsn") pod "8d928ba2-b289-4ede-96e5-a136771b99b1" (UID: "8d928ba2-b289-4ede-96e5-a136771b99b1"). InnerVolumeSpecName "kube-api-access-jxpsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:56:46 crc kubenswrapper[4900]: I1202 13:56:46.649423 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-bundle\") pod \"8d928ba2-b289-4ede-96e5-a136771b99b1\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " Dec 02 13:56:46 crc kubenswrapper[4900]: I1202 13:56:46.649484 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-util\") pod \"8d928ba2-b289-4ede-96e5-a136771b99b1\" (UID: \"8d928ba2-b289-4ede-96e5-a136771b99b1\") " Dec 02 13:56:46 crc kubenswrapper[4900]: I1202 13:56:46.649970 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxpsn\" (UniqueName: \"kubernetes.io/projected/8d928ba2-b289-4ede-96e5-a136771b99b1-kube-api-access-jxpsn\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:46 crc kubenswrapper[4900]: I1202 13:56:46.651522 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-bundle" (OuterVolumeSpecName: "bundle") pod "8d928ba2-b289-4ede-96e5-a136771b99b1" (UID: "8d928ba2-b289-4ede-96e5-a136771b99b1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:56:46 crc kubenswrapper[4900]: I1202 13:56:46.732244 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-util" (OuterVolumeSpecName: "util") pod "8d928ba2-b289-4ede-96e5-a136771b99b1" (UID: "8d928ba2-b289-4ede-96e5-a136771b99b1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:56:46 crc kubenswrapper[4900]: I1202 13:56:46.751068 4900 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:46 crc kubenswrapper[4900]: I1202 13:56:46.751095 4900 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d928ba2-b289-4ede-96e5-a136771b99b1-util\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:47 crc kubenswrapper[4900]: I1202 13:56:47.151444 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" event={"ID":"8d928ba2-b289-4ede-96e5-a136771b99b1","Type":"ContainerDied","Data":"8ceda9549ecbfb44f5d5c496d4cb528fe5335bd47ce7147aaf5c05ab356cf7a2"} Dec 02 13:56:47 crc kubenswrapper[4900]: I1202 13:56:47.151832 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ceda9549ecbfb44f5d5c496d4cb528fe5335bd47ce7147aaf5c05ab356cf7a2" Dec 02 13:56:47 crc kubenswrapper[4900]: I1202 13:56:47.151560 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.119615 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2"] Dec 02 13:56:55 crc kubenswrapper[4900]: E1202 13:56:55.121986 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95616fe1-4979-433d-afce-3235d5dab8a5" containerName="console" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.122116 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="95616fe1-4979-433d-afce-3235d5dab8a5" containerName="console" Dec 02 13:56:55 crc kubenswrapper[4900]: E1202 13:56:55.122226 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d928ba2-b289-4ede-96e5-a136771b99b1" containerName="pull" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.122334 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d928ba2-b289-4ede-96e5-a136771b99b1" containerName="pull" Dec 02 13:56:55 crc kubenswrapper[4900]: E1202 13:56:55.122451 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d928ba2-b289-4ede-96e5-a136771b99b1" containerName="extract" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.122548 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d928ba2-b289-4ede-96e5-a136771b99b1" containerName="extract" Dec 02 13:56:55 crc kubenswrapper[4900]: E1202 13:56:55.122677 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d928ba2-b289-4ede-96e5-a136771b99b1" containerName="util" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.122758 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d928ba2-b289-4ede-96e5-a136771b99b1" containerName="util" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.122950 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d928ba2-b289-4ede-96e5-a136771b99b1" containerName="extract" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.123034 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="95616fe1-4979-433d-afce-3235d5dab8a5" containerName="console" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.123576 4900 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.127814 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-hpgt2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.128238 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.128254 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.128622 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.128960 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.137639 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2"] Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.190911 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bf1a22ef-b575-4d5c-b109-3ec72f7eb657-apiservice-cert\") pod \"metallb-operator-controller-manager-5c5c9bf76c-rxvh2\" (UID: \"bf1a22ef-b575-4d5c-b109-3ec72f7eb657\") " pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.191191 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bf1a22ef-b575-4d5c-b109-3ec72f7eb657-webhook-cert\") pod \"metallb-operator-controller-manager-5c5c9bf76c-rxvh2\" (UID: \"bf1a22ef-b575-4d5c-b109-3ec72f7eb657\") " pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.191308 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv89z\" (UniqueName: \"kubernetes.io/projected/bf1a22ef-b575-4d5c-b109-3ec72f7eb657-kube-api-access-lv89z\") pod \"metallb-operator-controller-manager-5c5c9bf76c-rxvh2\" (UID: \"bf1a22ef-b575-4d5c-b109-3ec72f7eb657\") " pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.292689 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bf1a22ef-b575-4d5c-b109-3ec72f7eb657-apiservice-cert\") pod \"metallb-operator-controller-manager-5c5c9bf76c-rxvh2\" (UID: \"bf1a22ef-b575-4d5c-b109-3ec72f7eb657\") " pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.292783 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bf1a22ef-b575-4d5c-b109-3ec72f7eb657-webhook-cert\") pod \"metallb-operator-controller-manager-5c5c9bf76c-rxvh2\" (UID: \"bf1a22ef-b575-4d5c-b109-3ec72f7eb657\") " pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.292836 
4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv89z\" (UniqueName: \"kubernetes.io/projected/bf1a22ef-b575-4d5c-b109-3ec72f7eb657-kube-api-access-lv89z\") pod \"metallb-operator-controller-manager-5c5c9bf76c-rxvh2\" (UID: \"bf1a22ef-b575-4d5c-b109-3ec72f7eb657\") " pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.307204 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bf1a22ef-b575-4d5c-b109-3ec72f7eb657-apiservice-cert\") pod \"metallb-operator-controller-manager-5c5c9bf76c-rxvh2\" (UID: \"bf1a22ef-b575-4d5c-b109-3ec72f7eb657\") " pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.308168 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bf1a22ef-b575-4d5c-b109-3ec72f7eb657-webhook-cert\") pod \"metallb-operator-controller-manager-5c5c9bf76c-rxvh2\" (UID: \"bf1a22ef-b575-4d5c-b109-3ec72f7eb657\") " pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.319785 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv89z\" (UniqueName: \"kubernetes.io/projected/bf1a22ef-b575-4d5c-b109-3ec72f7eb657-kube-api-access-lv89z\") pod \"metallb-operator-controller-manager-5c5c9bf76c-rxvh2\" (UID: \"bf1a22ef-b575-4d5c-b109-3ec72f7eb657\") " pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.370717 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw"] Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.371511 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.373852 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.373918 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-lnzmk" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.387974 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.396394 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl6dn\" (UniqueName: \"kubernetes.io/projected/33794e3c-e37e-4e6c-b384-19cea6e2ce59-kube-api-access-zl6dn\") pod \"metallb-operator-webhook-server-57dfb79cdb-sqljw\" (UID: \"33794e3c-e37e-4e6c-b384-19cea6e2ce59\") " pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.396480 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw"] Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.396498 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33794e3c-e37e-4e6c-b384-19cea6e2ce59-webhook-cert\") pod \"metallb-operator-webhook-server-57dfb79cdb-sqljw\" (UID: \"33794e3c-e37e-4e6c-b384-19cea6e2ce59\") " pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.396527 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33794e3c-e37e-4e6c-b384-19cea6e2ce59-apiservice-cert\") pod \"metallb-operator-webhook-server-57dfb79cdb-sqljw\" (UID: \"33794e3c-e37e-4e6c-b384-19cea6e2ce59\") " pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.448494 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.497530 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33794e3c-e37e-4e6c-b384-19cea6e2ce59-webhook-cert\") pod \"metallb-operator-webhook-server-57dfb79cdb-sqljw\" (UID: \"33794e3c-e37e-4e6c-b384-19cea6e2ce59\") " pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.497939 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33794e3c-e37e-4e6c-b384-19cea6e2ce59-apiservice-cert\") pod \"metallb-operator-webhook-server-57dfb79cdb-sqljw\" (UID: \"33794e3c-e37e-4e6c-b384-19cea6e2ce59\") " pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.498011 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl6dn\" (UniqueName: \"kubernetes.io/projected/33794e3c-e37e-4e6c-b384-19cea6e2ce59-kube-api-access-zl6dn\") pod \"metallb-operator-webhook-server-57dfb79cdb-sqljw\" (UID: \"33794e3c-e37e-4e6c-b384-19cea6e2ce59\") " pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.501667 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33794e3c-e37e-4e6c-b384-19cea6e2ce59-webhook-cert\") pod \"metallb-operator-webhook-server-57dfb79cdb-sqljw\" (UID: \"33794e3c-e37e-4e6c-b384-19cea6e2ce59\") " pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.501729 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33794e3c-e37e-4e6c-b384-19cea6e2ce59-apiservice-cert\") pod \"metallb-operator-webhook-server-57dfb79cdb-sqljw\" (UID: \"33794e3c-e37e-4e6c-b384-19cea6e2ce59\") " pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.521351 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl6dn\" (UniqueName: \"kubernetes.io/projected/33794e3c-e37e-4e6c-b384-19cea6e2ce59-kube-api-access-zl6dn\") pod \"metallb-operator-webhook-server-57dfb79cdb-sqljw\" (UID: \"33794e3c-e37e-4e6c-b384-19cea6e2ce59\") " pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.689122 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:56:55 crc kubenswrapper[4900]: I1202 13:56:55.908372 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2"] Dec 02 13:56:56 crc kubenswrapper[4900]: I1202 13:56:56.005238 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw"] Dec 02 13:56:56 crc kubenswrapper[4900]: W1202 13:56:56.008291 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33794e3c_e37e_4e6c_b384_19cea6e2ce59.slice/crio-d2cb2b2e9d66c961cadf39f4b2b58498119aba42f4f60597deee8ac0a683abc1 WatchSource:0}: Error finding container d2cb2b2e9d66c961cadf39f4b2b58498119aba42f4f60597deee8ac0a683abc1: Status 404 returned error can't find the container with id d2cb2b2e9d66c961cadf39f4b2b58498119aba42f4f60597deee8ac0a683abc1 Dec 02 13:56:56 crc kubenswrapper[4900]: I1202 13:56:56.208768 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" event={"ID":"bf1a22ef-b575-4d5c-b109-3ec72f7eb657","Type":"ContainerStarted","Data":"7813350445b5edc7ade7abf54ed69759b52f818648026869f5ce6b1c2ea07feb"} Dec 02 13:56:56 crc kubenswrapper[4900]: I1202 13:56:56.209975 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" event={"ID":"33794e3c-e37e-4e6c-b384-19cea6e2ce59","Type":"ContainerStarted","Data":"d2cb2b2e9d66c961cadf39f4b2b58498119aba42f4f60597deee8ac0a683abc1"} Dec 02 13:57:01 crc kubenswrapper[4900]: I1202 13:57:01.242844 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" event={"ID":"bf1a22ef-b575-4d5c-b109-3ec72f7eb657","Type":"ContainerStarted","Data":"fdaf6c541e972a64064306451dbf03323f8acf238c2bc873109cb034e56127e8"} Dec 02 13:57:01 crc kubenswrapper[4900]: I1202 13:57:01.244608 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:57:01 crc kubenswrapper[4900]: I1202 13:57:01.244722 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" event={"ID":"33794e3c-e37e-4e6c-b384-19cea6e2ce59","Type":"ContainerStarted","Data":"c21074fc197a8ab3aab9c10fac94b90efaf32b627b8af5a3e1c36217fc8f64a6"} Dec 02 13:57:01 crc kubenswrapper[4900]: I1202 13:57:01.244866 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:57:01 crc kubenswrapper[4900]: I1202 13:57:01.262010 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" podStartSLOduration=1.3098397259999999 podStartE2EDuration="6.261972875s" podCreationTimestamp="2025-12-02 13:56:55 +0000 UTC" firstStartedPulling="2025-12-02 13:56:55.915903524 +0000 UTC m=+861.331717415" lastFinishedPulling="2025-12-02 13:57:00.868036723 +0000 UTC m=+866.283850564" observedRunningTime="2025-12-02 13:57:01.261887633 +0000 UTC m=+866.677701524" watchObservedRunningTime="2025-12-02 13:57:01.261972875 +0000 UTC m=+866.677786736" Dec 02 13:57:01 crc kubenswrapper[4900]: I1202 13:57:01.298519 4900 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" podStartSLOduration=1.424737336 podStartE2EDuration="6.298492802s" podCreationTimestamp="2025-12-02 13:56:55 +0000 UTC" firstStartedPulling="2025-12-02 13:56:56.010778087 +0000 UTC m=+861.426591938" lastFinishedPulling="2025-12-02 13:57:00.884533553 +0000 UTC m=+866.300347404" observedRunningTime="2025-12-02 13:57:01.2973162 +0000 UTC m=+866.713130051" watchObservedRunningTime="2025-12-02 13:57:01.298492802 +0000 UTC m=+866.714306683" Dec 02 13:57:15 crc kubenswrapper[4900]: I1202 13:57:15.320573 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qjn9h"] Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.324352 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.350820 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjn9h"] Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.512421 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-catalog-content\") pod \"community-operators-qjn9h\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.512740 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khvlk\" (UniqueName: \"kubernetes.io/projected/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-kube-api-access-khvlk\") pod \"community-operators-qjn9h\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.512857 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-utilities\") pod \"community-operators-qjn9h\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.613519 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-utilities\") pod \"community-operators-qjn9h\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.613586 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-catalog-content\") pod \"community-operators-qjn9h\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.613609 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khvlk\" (UniqueName: \"kubernetes.io/projected/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-kube-api-access-khvlk\") pod \"community-operators-qjn9h\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " pod="openshift-marketplace/community-operators-qjn9h" 
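[Editor's note: the two "Observed pod startup duration" entries above, for the metallb operator pods, encode the image-pull window explicitly. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the pull window (lastFinishedPulling to firstStartedPulling), with the arithmetic lining up exactly on the Go monotonic-clock offsets (the m=+... values klog appends to each timestamp): 866.283850564 - 861.331717415 = 4.952133149 s of pulling, and 6.261972875 - 4.952133149 = 1.309839726 s, which is the logged podStartSLOduration of the controller-manager pod to float precision. A minimal sketch that re-derives this from one journal line; the parsing is ours, not kubelet code, and feeding it from `journalctl -u kubelet` is an assumption about how this log was captured:]

```python
#!/usr/bin/env python3
# Re-derive podStartSLOduration from one kubelet pod_startup_latency_tracker line.
# Sketch only: field names are taken verbatim from the journal entries above.
import re

line = (
    'podStartSLOduration=1.3098397259999999 podStartE2EDuration="6.261972875s" '
    'podCreationTimestamp="2025-12-02 13:56:55 +0000 UTC" '
    'firstStartedPulling="2025-12-02 13:56:55.915903524 +0000 UTC m=+861.331717415" '
    'lastFinishedPulling="2025-12-02 13:57:00.868036723 +0000 UTC m=+866.283850564"'
)

def mono(field: str) -> float:
    """Monotonic-clock offset (seconds) of a quoted timestamp field."""
    return float(re.search(field + r'="[^"]*m=\+([\d.]+)"', line).group(1))

e2e = float(re.search(r'podStartE2EDuration="([\d.]+)s"', line).group(1))
slo = float(re.search(r'podStartSLOduration=([\d.]+)', line).group(1))
pull = mono("lastFinishedPulling") - mono("firstStartedPulling")

print(f"pull window         = {pull:.9f}s")
print(f"e2e - pull window   = {e2e - pull:.9f}s")
print(f"logged SLO duration = {slo:.9f}s")  # matches to float precision
```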
Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.614033 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-utilities\") pod \"community-operators-qjn9h\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.614062 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-catalog-content\") pod \"community-operators-qjn9h\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.639472 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khvlk\" (UniqueName: \"kubernetes.io/projected/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-kube-api-access-khvlk\") pod \"community-operators-qjn9h\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.650971 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:16 crc kubenswrapper[4900]: I1202 13:57:15.699564 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-57dfb79cdb-sqljw" Dec 02 13:57:17 crc kubenswrapper[4900]: I1202 13:57:17.044409 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjn9h"] Dec 02 13:57:17 crc kubenswrapper[4900]: I1202 13:57:17.357695 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjn9h" event={"ID":"a871f7d5-b9b2-4d35-a111-0ada11b6d21f","Type":"ContainerStarted","Data":"e2a29b55da69137ae375f50db36d26a64a45577ccee16b317f874bea00d33e75"} Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.305620 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vnv2f"] Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.310295 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.318110 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnv2f"] Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.367097 4900 generic.go:334] "Generic (PLEG): container finished" podID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" containerID="a7e85531eee53223db7d792ae3a6a83974f1919cac79fe392e40dc988d2ce9fd" exitCode=0 Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.367155 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjn9h" event={"ID":"a871f7d5-b9b2-4d35-a111-0ada11b6d21f","Type":"ContainerDied","Data":"a7e85531eee53223db7d792ae3a6a83974f1919cac79fe392e40dc988d2ce9fd"} Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.449963 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpvbt\" (UniqueName: \"kubernetes.io/projected/3816a98d-b06a-4d86-8f4f-3f399c8f912a-kube-api-access-wpvbt\") pod \"redhat-marketplace-vnv2f\" (UID: \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.450046 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-utilities\") pod \"redhat-marketplace-vnv2f\" (UID: \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.450122 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-catalog-content\") pod \"redhat-marketplace-vnv2f\" (UID: \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.551254 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-utilities\") pod \"redhat-marketplace-vnv2f\" (UID: \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.551490 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-catalog-content\") pod \"redhat-marketplace-vnv2f\" (UID: \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.551597 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpvbt\" (UniqueName: \"kubernetes.io/projected/3816a98d-b06a-4d86-8f4f-3f399c8f912a-kube-api-access-wpvbt\") pod \"redhat-marketplace-vnv2f\" (UID: \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.551994 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-utilities\") pod \"redhat-marketplace-vnv2f\" (UID: 
\"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.552300 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-catalog-content\") pod \"redhat-marketplace-vnv2f\" (UID: \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.574933 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpvbt\" (UniqueName: \"kubernetes.io/projected/3816a98d-b06a-4d86-8f4f-3f399c8f912a-kube-api-access-wpvbt\") pod \"redhat-marketplace-vnv2f\" (UID: \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.678214 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:18 crc kubenswrapper[4900]: I1202 13:57:18.893292 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnv2f"] Dec 02 13:57:19 crc kubenswrapper[4900]: I1202 13:57:19.372379 4900 generic.go:334] "Generic (PLEG): container finished" podID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" containerID="81f524c548e4b76ca41ab2ef0522aeb604507347a37c267511c642914f7f56dd" exitCode=0 Dec 02 13:57:19 crc kubenswrapper[4900]: I1202 13:57:19.372474 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnv2f" event={"ID":"3816a98d-b06a-4d86-8f4f-3f399c8f912a","Type":"ContainerDied","Data":"81f524c548e4b76ca41ab2ef0522aeb604507347a37c267511c642914f7f56dd"} Dec 02 13:57:19 crc kubenswrapper[4900]: I1202 13:57:19.372677 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnv2f" event={"ID":"3816a98d-b06a-4d86-8f4f-3f399c8f912a","Type":"ContainerStarted","Data":"da1dafabd2978d4336734f0a6242090a1901fb8b7b3004dff9790b24f1dceeae"} Dec 02 13:57:20 crc kubenswrapper[4900]: I1202 13:57:20.383504 4900 generic.go:334] "Generic (PLEG): container finished" podID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" containerID="42599eb49a5a5157830f8e0cc9884b8f41fde8de577f5cb244654e24e405efb7" exitCode=0 Dec 02 13:57:20 crc kubenswrapper[4900]: I1202 13:57:20.383563 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjn9h" event={"ID":"a871f7d5-b9b2-4d35-a111-0ada11b6d21f","Type":"ContainerDied","Data":"42599eb49a5a5157830f8e0cc9884b8f41fde8de577f5cb244654e24e405efb7"} Dec 02 13:57:21 crc kubenswrapper[4900]: I1202 13:57:21.392743 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjn9h" event={"ID":"a871f7d5-b9b2-4d35-a111-0ada11b6d21f","Type":"ContainerStarted","Data":"edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13"} Dec 02 13:57:21 crc kubenswrapper[4900]: I1202 13:57:21.394351 4900 generic.go:334] "Generic (PLEG): container finished" podID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" containerID="fc6c1c88edac9d628a2a9119320a5b0c7961028b6b322e56599c03f0eb3dab5e" exitCode=0 Dec 02 13:57:21 crc kubenswrapper[4900]: I1202 13:57:21.394406 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnv2f" 
event={"ID":"3816a98d-b06a-4d86-8f4f-3f399c8f912a","Type":"ContainerDied","Data":"fc6c1c88edac9d628a2a9119320a5b0c7961028b6b322e56599c03f0eb3dab5e"} Dec 02 13:57:21 crc kubenswrapper[4900]: I1202 13:57:21.433598 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qjn9h" podStartSLOduration=3.961267023 podStartE2EDuration="6.433576423s" podCreationTimestamp="2025-12-02 13:57:15 +0000 UTC" firstStartedPulling="2025-12-02 13:57:18.368990198 +0000 UTC m=+883.784804059" lastFinishedPulling="2025-12-02 13:57:20.841299598 +0000 UTC m=+886.257113459" observedRunningTime="2025-12-02 13:57:21.415109089 +0000 UTC m=+886.830922940" watchObservedRunningTime="2025-12-02 13:57:21.433576423 +0000 UTC m=+886.849390274" Dec 02 13:57:22 crc kubenswrapper[4900]: I1202 13:57:22.405719 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnv2f" event={"ID":"3816a98d-b06a-4d86-8f4f-3f399c8f912a","Type":"ContainerStarted","Data":"9b6981d3ef657e09fcc52c666e590bf3cdc4046c687271459cc7ad19fb82fffd"} Dec 02 13:57:22 crc kubenswrapper[4900]: I1202 13:57:22.430030 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vnv2f" podStartSLOduration=1.803635688 podStartE2EDuration="4.430001747s" podCreationTimestamp="2025-12-02 13:57:18 +0000 UTC" firstStartedPulling="2025-12-02 13:57:19.373767784 +0000 UTC m=+884.789581645" lastFinishedPulling="2025-12-02 13:57:22.000133843 +0000 UTC m=+887.415947704" observedRunningTime="2025-12-02 13:57:22.422324273 +0000 UTC m=+887.838138184" watchObservedRunningTime="2025-12-02 13:57:22.430001747 +0000 UTC m=+887.845815638" Dec 02 13:57:25 crc kubenswrapper[4900]: I1202 13:57:25.652055 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:25 crc kubenswrapper[4900]: I1202 13:57:25.652446 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:25 crc kubenswrapper[4900]: I1202 13:57:25.724214 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:26 crc kubenswrapper[4900]: I1202 13:57:26.505754 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:27 crc kubenswrapper[4900]: I1202 13:57:27.495750 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjn9h"] Dec 02 13:57:28 crc kubenswrapper[4900]: I1202 13:57:28.449287 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qjn9h" podUID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" containerName="registry-server" containerID="cri-o://edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13" gracePeriod=2 Dec 02 13:57:28 crc kubenswrapper[4900]: I1202 13:57:28.710852 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:28 crc kubenswrapper[4900]: I1202 13:57:28.711376 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:28 crc kubenswrapper[4900]: I1202 13:57:28.779462 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:28 crc kubenswrapper[4900]: I1202 13:57:28.913187 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.014624 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-utilities\") pod \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.014734 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khvlk\" (UniqueName: \"kubernetes.io/projected/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-kube-api-access-khvlk\") pod \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.014833 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-catalog-content\") pod \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\" (UID: \"a871f7d5-b9b2-4d35-a111-0ada11b6d21f\") " Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.016109 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-utilities" (OuterVolumeSpecName: "utilities") pod "a871f7d5-b9b2-4d35-a111-0ada11b6d21f" (UID: "a871f7d5-b9b2-4d35-a111-0ada11b6d21f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.026193 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-kube-api-access-khvlk" (OuterVolumeSpecName: "kube-api-access-khvlk") pod "a871f7d5-b9b2-4d35-a111-0ada11b6d21f" (UID: "a871f7d5-b9b2-4d35-a111-0ada11b6d21f"). InnerVolumeSpecName "kube-api-access-khvlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.097614 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a871f7d5-b9b2-4d35-a111-0ada11b6d21f" (UID: "a871f7d5-b9b2-4d35-a111-0ada11b6d21f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.116101 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.116144 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.116157 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khvlk\" (UniqueName: \"kubernetes.io/projected/a871f7d5-b9b2-4d35-a111-0ada11b6d21f-kube-api-access-khvlk\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.461900 4900 generic.go:334] "Generic (PLEG): container finished" podID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" containerID="edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13" exitCode=0 Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.461988 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjn9h" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.461995 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjn9h" event={"ID":"a871f7d5-b9b2-4d35-a111-0ada11b6d21f","Type":"ContainerDied","Data":"edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13"} Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.462067 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjn9h" event={"ID":"a871f7d5-b9b2-4d35-a111-0ada11b6d21f","Type":"ContainerDied","Data":"e2a29b55da69137ae375f50db36d26a64a45577ccee16b317f874bea00d33e75"} Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.462094 4900 scope.go:117] "RemoveContainer" containerID="edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.488705 4900 scope.go:117] "RemoveContainer" containerID="42599eb49a5a5157830f8e0cc9884b8f41fde8de577f5cb244654e24e405efb7" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.517546 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjn9h"] Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.536390 4900 scope.go:117] "RemoveContainer" containerID="a7e85531eee53223db7d792ae3a6a83974f1919cac79fe392e40dc988d2ce9fd" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.543431 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qjn9h"] Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.546672 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.576230 4900 scope.go:117] "RemoveContainer" containerID="edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13" Dec 02 13:57:29 crc kubenswrapper[4900]: E1202 13:57:29.577406 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13\": container with ID starting with 
edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13 not found: ID does not exist" containerID="edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.577459 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13"} err="failed to get container status \"edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13\": rpc error: code = NotFound desc = could not find container \"edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13\": container with ID starting with edd5496698aedfa9d3101960e80c52672b4833065166e62e534c4f0d6faaeb13 not found: ID does not exist" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.577493 4900 scope.go:117] "RemoveContainer" containerID="42599eb49a5a5157830f8e0cc9884b8f41fde8de577f5cb244654e24e405efb7" Dec 02 13:57:29 crc kubenswrapper[4900]: E1202 13:57:29.577942 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42599eb49a5a5157830f8e0cc9884b8f41fde8de577f5cb244654e24e405efb7\": container with ID starting with 42599eb49a5a5157830f8e0cc9884b8f41fde8de577f5cb244654e24e405efb7 not found: ID does not exist" containerID="42599eb49a5a5157830f8e0cc9884b8f41fde8de577f5cb244654e24e405efb7" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.577980 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42599eb49a5a5157830f8e0cc9884b8f41fde8de577f5cb244654e24e405efb7"} err="failed to get container status \"42599eb49a5a5157830f8e0cc9884b8f41fde8de577f5cb244654e24e405efb7\": rpc error: code = NotFound desc = could not find container \"42599eb49a5a5157830f8e0cc9884b8f41fde8de577f5cb244654e24e405efb7\": container with ID starting with 42599eb49a5a5157830f8e0cc9884b8f41fde8de577f5cb244654e24e405efb7 not found: ID does not exist" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.578007 4900 scope.go:117] "RemoveContainer" containerID="a7e85531eee53223db7d792ae3a6a83974f1919cac79fe392e40dc988d2ce9fd" Dec 02 13:57:29 crc kubenswrapper[4900]: E1202 13:57:29.578423 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e85531eee53223db7d792ae3a6a83974f1919cac79fe392e40dc988d2ce9fd\": container with ID starting with a7e85531eee53223db7d792ae3a6a83974f1919cac79fe392e40dc988d2ce9fd not found: ID does not exist" containerID="a7e85531eee53223db7d792ae3a6a83974f1919cac79fe392e40dc988d2ce9fd" Dec 02 13:57:29 crc kubenswrapper[4900]: I1202 13:57:29.578462 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e85531eee53223db7d792ae3a6a83974f1919cac79fe392e40dc988d2ce9fd"} err="failed to get container status \"a7e85531eee53223db7d792ae3a6a83974f1919cac79fe392e40dc988d2ce9fd\": rpc error: code = NotFound desc = could not find container \"a7e85531eee53223db7d792ae3a6a83974f1919cac79fe392e40dc988d2ce9fd\": container with ID starting with a7e85531eee53223db7d792ae3a6a83974f1919cac79fe392e40dc988d2ce9fd not found: ID does not exist" Dec 02 13:57:30 crc kubenswrapper[4900]: I1202 13:57:30.918057 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" path="/var/lib/kubelet/pods/a871f7d5-b9b2-4d35-a111-0ada11b6d21f/volumes"
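[Editor's note: the E-level entries above are expected noise, not a failure. Container removal is idempotent: each of the pod's three containers gets a first "RemoveContainer" pass (13:57:29.462 to 13:57:29.536) that succeeds, then a second pass (13:57:29.576 to 13:57:29.578) whose ContainerStatus lookup returns NotFound because the first pass already deleted it; the kubelet logs the error, treats the container as already gone, and finishes by cleaning the orphaned volumes dir. A sketch that pairs the two kinds of lines to confirm every removal hit the already-gone path; reading the journal from stdin is an assumption:]

```python
#!/usr/bin/env python3
# Pair kubelet "RemoveContainer" attempts with CRI NotFound results.
# Sketch only: patterns mirror the scope.go / log.go entries above.
import re
import sys

text = sys.stdin.read()
removed = list(dict.fromkeys(  # dedupe, keep order: each ID is logged twice
    re.findall(r'"RemoveContainer" containerID="([0-9a-f]{64})"', text)))
gone = set(re.findall(
    r'"ContainerStatus from runtime service failed".*?containerID="([0-9a-f]{64})"',
    text))

for cid in removed:
    verdict = "already gone in runtime (benign)" if cid in gone else "removed via CRI"
    print(f"{cid[:12]}  {verdict}")
```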
Dec 02 13:57:31 crc kubenswrapper[4900]: I1202 13:57:31.089242 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnv2f"] Dec 02 13:57:32 crc kubenswrapper[4900]: I1202 13:57:32.486306 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vnv2f" podUID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" containerName="registry-server" containerID="cri-o://9b6981d3ef657e09fcc52c666e590bf3cdc4046c687271459cc7ad19fb82fffd" gracePeriod=2 Dec 02 13:57:32 crc kubenswrapper[4900]: I1202 13:57:32.907422 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j7hjs"] Dec 02 13:57:32 crc kubenswrapper[4900]: E1202 13:57:32.908080 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" containerName="extract-content" Dec 02 13:57:32 crc kubenswrapper[4900]: I1202 13:57:32.908117 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" containerName="extract-content" Dec 02 13:57:32 crc kubenswrapper[4900]: E1202 13:57:32.908143 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" containerName="registry-server" Dec 02 13:57:32 crc kubenswrapper[4900]: I1202 13:57:32.908157 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" containerName="registry-server" Dec 02 13:57:32 crc kubenswrapper[4900]: E1202 13:57:32.908173 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" containerName="extract-utilities" Dec 02 13:57:32 crc kubenswrapper[4900]: I1202 13:57:32.908187 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" containerName="extract-utilities" Dec 02 13:57:32 crc kubenswrapper[4900]: I1202 13:57:32.908392 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a871f7d5-b9b2-4d35-a111-0ada11b6d21f" containerName="registry-server" Dec 02 13:57:32 crc kubenswrapper[4900]: I1202 13:57:32.910070 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:32 crc kubenswrapper[4900]: I1202 13:57:32.946782 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j7hjs"] Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.086111 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjkjv\" (UniqueName: \"kubernetes.io/projected/e7c1be85-3b50-400a-9b15-10f599f378ac-kube-api-access-rjkjv\") pod \"certified-operators-j7hjs\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.086159 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-utilities\") pod \"certified-operators-j7hjs\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.086271 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-catalog-content\") pod \"certified-operators-j7hjs\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.187407 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjkjv\" (UniqueName: \"kubernetes.io/projected/e7c1be85-3b50-400a-9b15-10f599f378ac-kube-api-access-rjkjv\") pod \"certified-operators-j7hjs\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.187459 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-utilities\") pod \"certified-operators-j7hjs\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.187514 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-catalog-content\") pod \"certified-operators-j7hjs\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.188054 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-utilities\") pod \"certified-operators-j7hjs\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.188081 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-catalog-content\") pod \"certified-operators-j7hjs\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.204535 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rjkjv\" (UniqueName: \"kubernetes.io/projected/e7c1be85-3b50-400a-9b15-10f599f378ac-kube-api-access-rjkjv\") pod \"certified-operators-j7hjs\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.313660 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.496459 4900 generic.go:334] "Generic (PLEG): container finished" podID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" containerID="9b6981d3ef657e09fcc52c666e590bf3cdc4046c687271459cc7ad19fb82fffd" exitCode=0 Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.496624 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnv2f" event={"ID":"3816a98d-b06a-4d86-8f4f-3f399c8f912a","Type":"ContainerDied","Data":"9b6981d3ef657e09fcc52c666e590bf3cdc4046c687271459cc7ad19fb82fffd"} Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.768622 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.802332 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j7hjs"] Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.895318 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-utilities\") pod \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\" (UID: \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.895355 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-catalog-content\") pod \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\" (UID: \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.895401 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpvbt\" (UniqueName: \"kubernetes.io/projected/3816a98d-b06a-4d86-8f4f-3f399c8f912a-kube-api-access-wpvbt\") pod \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\" (UID: \"3816a98d-b06a-4d86-8f4f-3f399c8f912a\") " Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.896216 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-utilities" (OuterVolumeSpecName: "utilities") pod "3816a98d-b06a-4d86-8f4f-3f399c8f912a" (UID: "3816a98d-b06a-4d86-8f4f-3f399c8f912a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.899904 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3816a98d-b06a-4d86-8f4f-3f399c8f912a-kube-api-access-wpvbt" (OuterVolumeSpecName: "kube-api-access-wpvbt") pod "3816a98d-b06a-4d86-8f4f-3f399c8f912a" (UID: "3816a98d-b06a-4d86-8f4f-3f399c8f912a"). InnerVolumeSpecName "kube-api-access-wpvbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.917066 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3816a98d-b06a-4d86-8f4f-3f399c8f912a" (UID: "3816a98d-b06a-4d86-8f4f-3f399c8f912a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.996854 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.996883 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3816a98d-b06a-4d86-8f4f-3f399c8f912a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:33 crc kubenswrapper[4900]: I1202 13:57:33.996894 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpvbt\" (UniqueName: \"kubernetes.io/projected/3816a98d-b06a-4d86-8f4f-3f399c8f912a-kube-api-access-wpvbt\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:34 crc kubenswrapper[4900]: I1202 13:57:34.506245 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnv2f" event={"ID":"3816a98d-b06a-4d86-8f4f-3f399c8f912a","Type":"ContainerDied","Data":"da1dafabd2978d4336734f0a6242090a1901fb8b7b3004dff9790b24f1dceeae"} Dec 02 13:57:34 crc kubenswrapper[4900]: I1202 13:57:34.506292 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnv2f" Dec 02 13:57:34 crc kubenswrapper[4900]: I1202 13:57:34.506340 4900 scope.go:117] "RemoveContainer" containerID="9b6981d3ef657e09fcc52c666e590bf3cdc4046c687271459cc7ad19fb82fffd" Dec 02 13:57:34 crc kubenswrapper[4900]: I1202 13:57:34.517193 4900 generic.go:334] "Generic (PLEG): container finished" podID="e7c1be85-3b50-400a-9b15-10f599f378ac" containerID="b1ef61a6426312cdd6c8997d8fc90f5c0a75754a6a4a23b31cde501c73933da0" exitCode=0 Dec 02 13:57:34 crc kubenswrapper[4900]: I1202 13:57:34.517255 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7hjs" event={"ID":"e7c1be85-3b50-400a-9b15-10f599f378ac","Type":"ContainerDied","Data":"b1ef61a6426312cdd6c8997d8fc90f5c0a75754a6a4a23b31cde501c73933da0"} Dec 02 13:57:34 crc kubenswrapper[4900]: I1202 13:57:34.517346 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7hjs" event={"ID":"e7c1be85-3b50-400a-9b15-10f599f378ac","Type":"ContainerStarted","Data":"283acc19892f4498dcc2a9aac8b72d99b440140fdb2bef0c44e592d05d61cea5"} Dec 02 13:57:34 crc kubenswrapper[4900]: I1202 13:57:34.546679 4900 scope.go:117] "RemoveContainer" containerID="fc6c1c88edac9d628a2a9119320a5b0c7961028b6b322e56599c03f0eb3dab5e" Dec 02 13:57:34 crc kubenswrapper[4900]: I1202 13:57:34.562354 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnv2f"] Dec 02 13:57:34 crc kubenswrapper[4900]: I1202 13:57:34.569583 4900 scope.go:117] "RemoveContainer" containerID="81f524c548e4b76ca41ab2ef0522aeb604507347a37c267511c642914f7f56dd" Dec 02 13:57:34 crc kubenswrapper[4900]: I1202 13:57:34.572461 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-vnv2f"] Dec 02 13:57:34 crc kubenswrapper[4900]: I1202 13:57:34.918623 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" path="/var/lib/kubelet/pods/3816a98d-b06a-4d86-8f4f-3f399c8f912a/volumes" Dec 02 13:57:35 crc kubenswrapper[4900]: I1202 13:57:35.453099 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" Dec 02 13:57:35 crc kubenswrapper[4900]: I1202 13:57:35.531261 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7hjs" event={"ID":"e7c1be85-3b50-400a-9b15-10f599f378ac","Type":"ContainerStarted","Data":"4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631"} Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.306491 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-n75f5"] Dec 02 13:57:36 crc kubenswrapper[4900]: E1202 13:57:36.306985 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" containerName="extract-content" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.307014 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" containerName="extract-content" Dec 02 13:57:36 crc kubenswrapper[4900]: E1202 13:57:36.307043 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" containerName="extract-utilities" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.307058 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" containerName="extract-utilities" Dec 02 13:57:36 crc kubenswrapper[4900]: E1202 13:57:36.307084 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" containerName="registry-server" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.307098 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" containerName="registry-server" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.307308 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3816a98d-b06a-4d86-8f4f-3f399c8f912a" containerName="registry-server" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.310740 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr"] Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.311342 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.311840 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.320109 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tdqbj" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.320139 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.321305 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.321385 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.325691 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr"] Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.343684 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-metrics\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.343739 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-frr-sockets\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.343781 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-frr-conf\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.343809 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf13f239-289a-466a-8198-b9d0045278ea-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-qpbtr\" (UID: \"cf13f239-289a-466a-8198-b9d0045278ea\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.343838 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3a65ca05-cf73-4b08-989e-0db3f81747a2-frr-startup\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.343871 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-reloader\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.343905 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtqz5\" (UniqueName: \"kubernetes.io/projected/3a65ca05-cf73-4b08-989e-0db3f81747a2-kube-api-access-jtqz5\") pod \"frr-k8s-n75f5\" (UID: 
\"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.343938 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a65ca05-cf73-4b08-989e-0db3f81747a2-metrics-certs\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.343976 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwv97\" (UniqueName: \"kubernetes.io/projected/cf13f239-289a-466a-8198-b9d0045278ea-kube-api-access-qwv97\") pod \"frr-k8s-webhook-server-7fcb986d4-qpbtr\" (UID: \"cf13f239-289a-466a-8198-b9d0045278ea\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.427425 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-c4q7g"] Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.428438 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.430939 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.431131 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.431336 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.431450 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nphqn" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.444826 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-reloader\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.444874 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtqz5\" (UniqueName: \"kubernetes.io/projected/3a65ca05-cf73-4b08-989e-0db3f81747a2-kube-api-access-jtqz5\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.444899 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmk72\" (UniqueName: \"kubernetes.io/projected/b7762b7c-1278-4ecd-a268-a87332a08d60-kube-api-access-tmk72\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.444920 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b7762b7c-1278-4ecd-a268-a87332a08d60-metallb-excludel2\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.444942 4900 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-metrics-certs\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.444961 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a65ca05-cf73-4b08-989e-0db3f81747a2-metrics-certs\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.444982 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwv97\" (UniqueName: \"kubernetes.io/projected/cf13f239-289a-466a-8198-b9d0045278ea-kube-api-access-qwv97\") pod \"frr-k8s-webhook-server-7fcb986d4-qpbtr\" (UID: \"cf13f239-289a-466a-8198-b9d0045278ea\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.445005 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-metrics\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.445030 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-frr-sockets\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.445045 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.445070 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-frr-conf\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.445092 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf13f239-289a-466a-8198-b9d0045278ea-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-qpbtr\" (UID: \"cf13f239-289a-466a-8198-b9d0045278ea\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.445130 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3a65ca05-cf73-4b08-989e-0db3f81747a2-frr-startup\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.445965 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3a65ca05-cf73-4b08-989e-0db3f81747a2-frr-startup\") pod \"frr-k8s-n75f5\" (UID: 
\"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.446180 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-reloader\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.446428 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-frr-conf\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.446450 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-metrics\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: E1202 13:57:36.446484 4900 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 02 13:57:36 crc kubenswrapper[4900]: E1202 13:57:36.446524 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf13f239-289a-466a-8198-b9d0045278ea-cert podName:cf13f239-289a-466a-8198-b9d0045278ea nodeName:}" failed. No retries permitted until 2025-12-02 13:57:36.946510771 +0000 UTC m=+902.362324622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cf13f239-289a-466a-8198-b9d0045278ea-cert") pod "frr-k8s-webhook-server-7fcb986d4-qpbtr" (UID: "cf13f239-289a-466a-8198-b9d0045278ea") : secret "frr-k8s-webhook-server-cert" not found Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.446658 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3a65ca05-cf73-4b08-989e-0db3f81747a2-frr-sockets\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.455260 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a65ca05-cf73-4b08-989e-0db3f81747a2-metrics-certs\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.456452 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-4w4nt"] Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.457323 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.461896 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.471287 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-4w4nt"] Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.483262 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtqz5\" (UniqueName: \"kubernetes.io/projected/3a65ca05-cf73-4b08-989e-0db3f81747a2-kube-api-access-jtqz5\") pod \"frr-k8s-n75f5\" (UID: \"3a65ca05-cf73-4b08-989e-0db3f81747a2\") " pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.487771 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwv97\" (UniqueName: \"kubernetes.io/projected/cf13f239-289a-466a-8198-b9d0045278ea-kube-api-access-qwv97\") pod \"frr-k8s-webhook-server-7fcb986d4-qpbtr\" (UID: \"cf13f239-289a-466a-8198-b9d0045278ea\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.537985 4900 generic.go:334] "Generic (PLEG): container finished" podID="e7c1be85-3b50-400a-9b15-10f599f378ac" containerID="4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631" exitCode=0 Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.538022 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7hjs" event={"ID":"e7c1be85-3b50-400a-9b15-10f599f378ac","Type":"ContainerDied","Data":"4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631"} Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.545883 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-metrics-certs\") pod \"controller-f8648f98b-4w4nt\" (UID: \"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4\") " pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.545941 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.545992 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkds\" (UniqueName: \"kubernetes.io/projected/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-kube-api-access-kdkds\") pod \"controller-f8648f98b-4w4nt\" (UID: \"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4\") " pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.546024 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-cert\") pod \"controller-f8648f98b-4w4nt\" (UID: \"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4\") " pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.546058 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmk72\" (UniqueName: 
\"kubernetes.io/projected/b7762b7c-1278-4ecd-a268-a87332a08d60-kube-api-access-tmk72\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.546074 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b7762b7c-1278-4ecd-a268-a87332a08d60-metallb-excludel2\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: E1202 13:57:36.546083 4900 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 13:57:36 crc kubenswrapper[4900]: E1202 13:57:36.546127 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist podName:b7762b7c-1278-4ecd-a268-a87332a08d60 nodeName:}" failed. No retries permitted until 2025-12-02 13:57:37.046112415 +0000 UTC m=+902.461926256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist") pod "speaker-c4q7g" (UID: "b7762b7c-1278-4ecd-a268-a87332a08d60") : secret "metallb-memberlist" not found Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.546091 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-metrics-certs\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.546859 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b7762b7c-1278-4ecd-a268-a87332a08d60-metallb-excludel2\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.552054 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-metrics-certs\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.570269 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmk72\" (UniqueName: \"kubernetes.io/projected/b7762b7c-1278-4ecd-a268-a87332a08d60-kube-api-access-tmk72\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.647040 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-metrics-certs\") pod \"controller-f8648f98b-4w4nt\" (UID: \"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4\") " pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.647138 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdkds\" (UniqueName: \"kubernetes.io/projected/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-kube-api-access-kdkds\") pod \"controller-f8648f98b-4w4nt\" (UID: \"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4\") " 
pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.647167 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-cert\") pod \"controller-f8648f98b-4w4nt\" (UID: \"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4\") " pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:36 crc kubenswrapper[4900]: E1202 13:57:36.647197 4900 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 02 13:57:36 crc kubenswrapper[4900]: E1202 13:57:36.647284 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-metrics-certs podName:32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4 nodeName:}" failed. No retries permitted until 2025-12-02 13:57:37.147263573 +0000 UTC m=+902.563077414 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-metrics-certs") pod "controller-f8648f98b-4w4nt" (UID: "32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4") : secret "controller-certs-secret" not found Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.650810 4900 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.652170 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.664190 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-cert\") pod \"controller-f8648f98b-4w4nt\" (UID: \"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4\") " pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.669296 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdkds\" (UniqueName: \"kubernetes.io/projected/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-kube-api-access-kdkds\") pod \"controller-f8648f98b-4w4nt\" (UID: \"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4\") " pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.951610 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf13f239-289a-466a-8198-b9d0045278ea-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-qpbtr\" (UID: \"cf13f239-289a-466a-8198-b9d0045278ea\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" Dec 02 13:57:36 crc kubenswrapper[4900]: I1202 13:57:36.956219 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cf13f239-289a-466a-8198-b9d0045278ea-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-qpbtr\" (UID: \"cf13f239-289a-466a-8198-b9d0045278ea\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" Dec 02 13:57:37 crc kubenswrapper[4900]: I1202 13:57:37.052740 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:37 crc kubenswrapper[4900]: E1202 13:57:37.052901 4900 
secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 13:57:37 crc kubenswrapper[4900]: E1202 13:57:37.052956 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist podName:b7762b7c-1278-4ecd-a268-a87332a08d60 nodeName:}" failed. No retries permitted until 2025-12-02 13:57:38.052939532 +0000 UTC m=+903.468753393 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist") pod "speaker-c4q7g" (UID: "b7762b7c-1278-4ecd-a268-a87332a08d60") : secret "metallb-memberlist" not found Dec 02 13:57:37 crc kubenswrapper[4900]: I1202 13:57:37.153705 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-metrics-certs\") pod \"controller-f8648f98b-4w4nt\" (UID: \"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4\") " pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:37 crc kubenswrapper[4900]: I1202 13:57:37.158261 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4-metrics-certs\") pod \"controller-f8648f98b-4w4nt\" (UID: \"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4\") " pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:37 crc kubenswrapper[4900]: I1202 13:57:37.241293 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" Dec 02 13:57:37 crc kubenswrapper[4900]: I1202 13:57:37.432382 4900 util.go:30] "No sandbox for pod can be found. 
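The reflector.go "Caches populated" messages scattered above are emitted when the kubelet's client-go watch caches finish their initial LIST for each Secret/ConfigMap a pod references. A minimal sketch of equivalent client-go machinery, using a fake clientset as a stand-in; the kubelet's own implementation watches individual objects rather than a whole namespace, so this is an approximation only:

    package main

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes/fake"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cs := fake.NewSimpleClientset() // stand-in; the kubelet uses its real client
        stop := make(chan struct{})
        defer close(stop)

        factory := informers.NewSharedInformerFactoryWithOptions(
            cs, 10*time.Minute, informers.WithNamespace("metallb-system"))
        inf := factory.Core().V1().Secrets().Informer()
        factory.Start(stop)
        // Returns once the initial LIST completes -- the moment a reflector
        // would log "Caches populated" for the corresponding type.
        cache.WaitForCacheSync(stop, inf.HasSynced)
    }
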
Need to start a new one" pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:37 crc kubenswrapper[4900]: I1202 13:57:37.565856 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7hjs" event={"ID":"e7c1be85-3b50-400a-9b15-10f599f378ac","Type":"ContainerStarted","Data":"4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76"} Dec 02 13:57:37 crc kubenswrapper[4900]: I1202 13:57:37.572170 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n75f5" event={"ID":"3a65ca05-cf73-4b08-989e-0db3f81747a2","Type":"ContainerStarted","Data":"2566a1c21246d144338356f1062376678263d31235535e627cb231eacd1dccf4"} Dec 02 13:57:37 crc kubenswrapper[4900]: I1202 13:57:37.583162 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j7hjs" podStartSLOduration=3.058300017 podStartE2EDuration="5.58314756s" podCreationTimestamp="2025-12-02 13:57:32 +0000 UTC" firstStartedPulling="2025-12-02 13:57:34.522054371 +0000 UTC m=+899.937868262" lastFinishedPulling="2025-12-02 13:57:37.046901944 +0000 UTC m=+902.462715805" observedRunningTime="2025-12-02 13:57:37.579158128 +0000 UTC m=+902.994971979" watchObservedRunningTime="2025-12-02 13:57:37.58314756 +0000 UTC m=+902.998961411" Dec 02 13:57:37 crc kubenswrapper[4900]: I1202 13:57:37.737953 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr"] Dec 02 13:57:37 crc kubenswrapper[4900]: W1202 13:57:37.750437 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf13f239_289a_466a_8198_b9d0045278ea.slice/crio-6b99fc2c2744163a558118fcf32bb1b136c12b9462b350d8559c09bb6aed5db2 WatchSource:0}: Error finding container 6b99fc2c2744163a558118fcf32bb1b136c12b9462b350d8559c09bb6aed5db2: Status 404 returned error can't find the container with id 6b99fc2c2744163a558118fcf32bb1b136c12b9462b350d8559c09bb6aed5db2 Dec 02 13:57:37 crc kubenswrapper[4900]: I1202 13:57:37.887525 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-4w4nt"] Dec 02 13:57:37 crc kubenswrapper[4900]: W1202 13:57:37.895835 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32340cf3_dbfb_4e8b_bc67_9da7c13ac1f4.slice/crio-5599009e2566298310013726906c60d0bc389e1e29078b348eeef3a5e132a4d6 WatchSource:0}: Error finding container 5599009e2566298310013726906c60d0bc389e1e29078b348eeef3a5e132a4d6: Status 404 returned error can't find the container with id 5599009e2566298310013726906c60d0bc389e1e29078b348eeef3a5e132a4d6 Dec 02 13:57:38 crc kubenswrapper[4900]: I1202 13:57:38.102148 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:38 crc kubenswrapper[4900]: E1202 13:57:38.102464 4900 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 13:57:38 crc kubenswrapper[4900]: E1202 13:57:38.102547 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist podName:b7762b7c-1278-4ecd-a268-a87332a08d60 nodeName:}" failed. 
No retries permitted until 2025-12-02 13:57:40.102527186 +0000 UTC m=+905.518341047 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist") pod "speaker-c4q7g" (UID: "b7762b7c-1278-4ecd-a268-a87332a08d60") : secret "metallb-memberlist" not found Dec 02 13:57:38 crc kubenswrapper[4900]: I1202 13:57:38.581048 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" event={"ID":"cf13f239-289a-466a-8198-b9d0045278ea","Type":"ContainerStarted","Data":"6b99fc2c2744163a558118fcf32bb1b136c12b9462b350d8559c09bb6aed5db2"} Dec 02 13:57:38 crc kubenswrapper[4900]: I1202 13:57:38.584481 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-4w4nt" event={"ID":"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4","Type":"ContainerStarted","Data":"995341ce873227293abe92243caf7b451f205c551c49d5d1a8d72c42da489840"} Dec 02 13:57:38 crc kubenswrapper[4900]: I1202 13:57:38.584500 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-4w4nt" event={"ID":"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4","Type":"ContainerStarted","Data":"5fdd49a55581deb2ce3bcba4d939f4f7613331cb1df47309531c6b903be3e20b"} Dec 02 13:57:38 crc kubenswrapper[4900]: I1202 13:57:38.584510 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-4w4nt" event={"ID":"32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4","Type":"ContainerStarted","Data":"5599009e2566298310013726906c60d0bc389e1e29078b348eeef3a5e132a4d6"} Dec 02 13:57:38 crc kubenswrapper[4900]: I1202 13:57:38.584530 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:40 crc kubenswrapper[4900]: I1202 13:57:40.139357 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:40 crc kubenswrapper[4900]: I1202 13:57:40.160312 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b7762b7c-1278-4ecd-a268-a87332a08d60-memberlist\") pod \"speaker-c4q7g\" (UID: \"b7762b7c-1278-4ecd-a268-a87332a08d60\") " pod="metallb-system/speaker-c4q7g" Dec 02 13:57:40 crc kubenswrapper[4900]: I1202 13:57:40.347622 4900 util.go:30] "No sandbox for pod can be found. 
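The "durationBeforeRetry" values for the memberlist mount trace out the kubelet's per-operation exponential backoff: 500ms at 13:57:36, 1s at 13:57:37, 2s at 13:57:38, after which the secret exists and the mount at 13:57:40 succeeds. A sketch of that doubling; the 500ms initial delay comes straight from the log, while the cap is kubelet-internal and assumed here:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // initial delay, as logged
        const maxDelay = 2 * time.Minute // assumed cap, for illustration only
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
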
Need to start a new one" pod="metallb-system/speaker-c4q7g" Dec 02 13:57:40 crc kubenswrapper[4900]: I1202 13:57:40.597668 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-c4q7g" event={"ID":"b7762b7c-1278-4ecd-a268-a87332a08d60","Type":"ContainerStarted","Data":"eba8e5be15617dbeb7b9c2c88083762c5a85b7fbadd6bfd0ab4d01a54e418b35"} Dec 02 13:57:41 crc kubenswrapper[4900]: I1202 13:57:41.609818 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-c4q7g" event={"ID":"b7762b7c-1278-4ecd-a268-a87332a08d60","Type":"ContainerStarted","Data":"fb160b40c71afd1504098769f5826473403da21a9075b453310c97cc9c73397d"} Dec 02 13:57:42 crc kubenswrapper[4900]: I1202 13:57:42.620792 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-c4q7g" event={"ID":"b7762b7c-1278-4ecd-a268-a87332a08d60","Type":"ContainerStarted","Data":"5788edf3c4b7f3603d8bc02c59374541470d6ec1a305be44eb29a72417553196"} Dec 02 13:57:42 crc kubenswrapper[4900]: I1202 13:57:42.620957 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-c4q7g" Dec 02 13:57:42 crc kubenswrapper[4900]: I1202 13:57:42.645201 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-c4q7g" podStartSLOduration=6.64518625 podStartE2EDuration="6.64518625s" podCreationTimestamp="2025-12-02 13:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:57:42.638897075 +0000 UTC m=+908.054710926" watchObservedRunningTime="2025-12-02 13:57:42.64518625 +0000 UTC m=+908.061000101" Dec 02 13:57:42 crc kubenswrapper[4900]: I1202 13:57:42.646231 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-4w4nt" podStartSLOduration=6.646225929 podStartE2EDuration="6.646225929s" podCreationTimestamp="2025-12-02 13:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:57:38.605837524 +0000 UTC m=+904.021651365" watchObservedRunningTime="2025-12-02 13:57:42.646225929 +0000 UTC m=+908.062039780" Dec 02 13:57:43 crc kubenswrapper[4900]: I1202 13:57:43.314761 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:43 crc kubenswrapper[4900]: I1202 13:57:43.315778 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:43 crc kubenswrapper[4900]: I1202 13:57:43.383555 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:43 crc kubenswrapper[4900]: I1202 13:57:43.670170 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:43 crc kubenswrapper[4900]: I1202 13:57:43.709350 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j7hjs"] Dec 02 13:57:45 crc kubenswrapper[4900]: I1202 13:57:45.116475 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 02 13:57:45 crc kubenswrapper[4900]: I1202 13:57:45.116886 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:57:45 crc kubenswrapper[4900]: I1202 13:57:45.639436 4900 generic.go:334] "Generic (PLEG): container finished" podID="3a65ca05-cf73-4b08-989e-0db3f81747a2" containerID="0b6efb3ae522657bc7f69a1c8797264a910aa7834387ff1eede7a2e7d1b375c0" exitCode=0 Dec 02 13:57:45 crc kubenswrapper[4900]: I1202 13:57:45.639728 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n75f5" event={"ID":"3a65ca05-cf73-4b08-989e-0db3f81747a2","Type":"ContainerDied","Data":"0b6efb3ae522657bc7f69a1c8797264a910aa7834387ff1eede7a2e7d1b375c0"} Dec 02 13:57:45 crc kubenswrapper[4900]: I1202 13:57:45.641030 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" event={"ID":"cf13f239-289a-466a-8198-b9d0045278ea","Type":"ContainerStarted","Data":"c7ab7d71ac1b2a37e9142d4df4f2b90594c24f364b5bd6a0e1b41c0ea269ae51"} Dec 02 13:57:45 crc kubenswrapper[4900]: I1202 13:57:45.641195 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j7hjs" podUID="e7c1be85-3b50-400a-9b15-10f599f378ac" containerName="registry-server" containerID="cri-o://4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76" gracePeriod=2 Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.057975 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.069813 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-utilities\") pod \"e7c1be85-3b50-400a-9b15-10f599f378ac\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.069898 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjkjv\" (UniqueName: \"kubernetes.io/projected/e7c1be85-3b50-400a-9b15-10f599f378ac-kube-api-access-rjkjv\") pod \"e7c1be85-3b50-400a-9b15-10f599f378ac\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.069928 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-catalog-content\") pod \"e7c1be85-3b50-400a-9b15-10f599f378ac\" (UID: \"e7c1be85-3b50-400a-9b15-10f599f378ac\") " Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.071094 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-utilities" (OuterVolumeSpecName: "utilities") pod "e7c1be85-3b50-400a-9b15-10f599f378ac" (UID: "e7c1be85-3b50-400a-9b15-10f599f378ac"). InnerVolumeSpecName "utilities". 
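The machine-config-daemon liveness failure above records the exact URL probed (http://127.0.0.1:8798/health) and the TCP-level refusal. A reconstruction of what that probe spec plausibly looks like; host, path, and port are taken from the logged URL, while the periods and thresholds are assumptions, since they are not recoverable from the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1", // from the logged URL
                    Path: "/health",   // from the logged URL
                    Port: intstr.FromInt(8798),
                },
            },
            PeriodSeconds:    10, // assumed
            FailureThreshold: 3,  // assumed
        }
        fmt.Printf("GET http://%s:%d%s\n",
            probe.HTTPGet.Host, probe.HTTPGet.Port.IntValue(), probe.HTTGet().Path)
    }
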
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.076528 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c1be85-3b50-400a-9b15-10f599f378ac-kube-api-access-rjkjv" (OuterVolumeSpecName: "kube-api-access-rjkjv") pod "e7c1be85-3b50-400a-9b15-10f599f378ac" (UID: "e7c1be85-3b50-400a-9b15-10f599f378ac"). InnerVolumeSpecName "kube-api-access-rjkjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.086637 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" podStartSLOduration=2.6609839859999997 podStartE2EDuration="10.086617824s" podCreationTimestamp="2025-12-02 13:57:36 +0000 UTC" firstStartedPulling="2025-12-02 13:57:37.751992882 +0000 UTC m=+903.167806733" lastFinishedPulling="2025-12-02 13:57:45.17762671 +0000 UTC m=+910.593440571" observedRunningTime="2025-12-02 13:57:45.6931237 +0000 UTC m=+911.108937571" watchObservedRunningTime="2025-12-02 13:57:46.086617824 +0000 UTC m=+911.502431675" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.171781 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.171810 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjkjv\" (UniqueName: \"kubernetes.io/projected/e7c1be85-3b50-400a-9b15-10f599f378ac-kube-api-access-rjkjv\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.222273 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7c1be85-3b50-400a-9b15-10f599f378ac" (UID: "e7c1be85-3b50-400a-9b15-10f599f378ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.273104 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c1be85-3b50-400a-9b15-10f599f378ac-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.649816 4900 generic.go:334] "Generic (PLEG): container finished" podID="e7c1be85-3b50-400a-9b15-10f599f378ac" containerID="4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76" exitCode=0 Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.650101 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7hjs" event={"ID":"e7c1be85-3b50-400a-9b15-10f599f378ac","Type":"ContainerDied","Data":"4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76"} Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.650187 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j7hjs" event={"ID":"e7c1be85-3b50-400a-9b15-10f599f378ac","Type":"ContainerDied","Data":"283acc19892f4498dcc2a9aac8b72d99b440140fdb2bef0c44e592d05d61cea5"} Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.650213 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j7hjs" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.650222 4900 scope.go:117] "RemoveContainer" containerID="4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.652522 4900 generic.go:334] "Generic (PLEG): container finished" podID="3a65ca05-cf73-4b08-989e-0db3f81747a2" containerID="293cb0bde19d102e22036c206494c831289362c7a35bcef5586706f71d9a2b3e" exitCode=0 Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.653545 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n75f5" event={"ID":"3a65ca05-cf73-4b08-989e-0db3f81747a2","Type":"ContainerDied","Data":"293cb0bde19d102e22036c206494c831289362c7a35bcef5586706f71d9a2b3e"} Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.653573 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.673040 4900 scope.go:117] "RemoveContainer" containerID="4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.720296 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j7hjs"] Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.727698 4900 scope.go:117] "RemoveContainer" containerID="b1ef61a6426312cdd6c8997d8fc90f5c0a75754a6a4a23b31cde501c73933da0" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.747486 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j7hjs"] Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.791028 4900 scope.go:117] "RemoveContainer" containerID="4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76" Dec 02 13:57:46 crc kubenswrapper[4900]: E1202 13:57:46.791596 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76\": container with ID starting with 4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76 not found: ID does not exist" containerID="4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.791736 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76"} err="failed to get container status \"4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76\": rpc error: code = NotFound desc = could not find container \"4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76\": container with ID starting with 4a231abb996416feaaa494d457dccf92f199c48371908064c5beb47d3b854c76 not found: ID does not exist" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.791772 4900 scope.go:117] "RemoveContainer" containerID="4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631" Dec 02 13:57:46 crc kubenswrapper[4900]: E1202 13:57:46.792415 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631\": container with ID starting with 4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631 not found: ID does not exist" 
containerID="4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.792441 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631"} err="failed to get container status \"4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631\": rpc error: code = NotFound desc = could not find container \"4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631\": container with ID starting with 4f3fedc58dc8cc785eb4eb0fce20ef1616f8d9fe6c2594376b4450110233c631 not found: ID does not exist" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.792462 4900 scope.go:117] "RemoveContainer" containerID="b1ef61a6426312cdd6c8997d8fc90f5c0a75754a6a4a23b31cde501c73933da0" Dec 02 13:57:46 crc kubenswrapper[4900]: E1202 13:57:46.792822 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ef61a6426312cdd6c8997d8fc90f5c0a75754a6a4a23b31cde501c73933da0\": container with ID starting with b1ef61a6426312cdd6c8997d8fc90f5c0a75754a6a4a23b31cde501c73933da0 not found: ID does not exist" containerID="b1ef61a6426312cdd6c8997d8fc90f5c0a75754a6a4a23b31cde501c73933da0" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.792855 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ef61a6426312cdd6c8997d8fc90f5c0a75754a6a4a23b31cde501c73933da0"} err="failed to get container status \"b1ef61a6426312cdd6c8997d8fc90f5c0a75754a6a4a23b31cde501c73933da0\": rpc error: code = NotFound desc = could not find container \"b1ef61a6426312cdd6c8997d8fc90f5c0a75754a6a4a23b31cde501c73933da0\": container with ID starting with b1ef61a6426312cdd6c8997d8fc90f5c0a75754a6a4a23b31cde501c73933da0 not found: ID does not exist" Dec 02 13:57:46 crc kubenswrapper[4900]: I1202 13:57:46.921268 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c1be85-3b50-400a-9b15-10f599f378ac" path="/var/lib/kubelet/pods/e7c1be85-3b50-400a-9b15-10f599f378ac/volumes" Dec 02 13:57:47 crc kubenswrapper[4900]: I1202 13:57:47.664675 4900 generic.go:334] "Generic (PLEG): container finished" podID="3a65ca05-cf73-4b08-989e-0db3f81747a2" containerID="dc8cd5bd1efd0bdc0f1456396c26da08865aceae2efaa718c566f30e79b793ed" exitCode=0 Dec 02 13:57:47 crc kubenswrapper[4900]: I1202 13:57:47.666347 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n75f5" event={"ID":"3a65ca05-cf73-4b08-989e-0db3f81747a2","Type":"ContainerDied","Data":"dc8cd5bd1efd0bdc0f1456396c26da08865aceae2efaa718c566f30e79b793ed"} Dec 02 13:57:48 crc kubenswrapper[4900]: I1202 13:57:48.677316 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n75f5" event={"ID":"3a65ca05-cf73-4b08-989e-0db3f81747a2","Type":"ContainerStarted","Data":"a673194d10a0191cc15fb03dc81bd9ec6ca0bd69b6dbb43e24ef19c35feb5d62"} Dec 02 13:57:48 crc kubenswrapper[4900]: I1202 13:57:48.677671 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n75f5" event={"ID":"3a65ca05-cf73-4b08-989e-0db3f81747a2","Type":"ContainerStarted","Data":"3f323288c67b286c86a0fb59acb125d5bb547c5bd4f24281acb3d91c4a634890"} Dec 02 13:57:48 crc kubenswrapper[4900]: I1202 13:57:48.677690 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n75f5" 
event={"ID":"3a65ca05-cf73-4b08-989e-0db3f81747a2","Type":"ContainerStarted","Data":"ad1adda3e362e83796a575fdc734ef1d7b69ff203ee2f2b9df2a385ad743af91"} Dec 02 13:57:48 crc kubenswrapper[4900]: I1202 13:57:48.677702 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n75f5" event={"ID":"3a65ca05-cf73-4b08-989e-0db3f81747a2","Type":"ContainerStarted","Data":"d501f059f47fbfcc321faced5f7da63539f92b29864e0f0f9b620028098e26bd"} Dec 02 13:57:49 crc kubenswrapper[4900]: I1202 13:57:49.692565 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n75f5" event={"ID":"3a65ca05-cf73-4b08-989e-0db3f81747a2","Type":"ContainerStarted","Data":"219f472b55fe609e3833f53ce9b3976abce2366f2766402614c5920676e390ac"} Dec 02 13:57:49 crc kubenswrapper[4900]: I1202 13:57:49.693023 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n75f5" event={"ID":"3a65ca05-cf73-4b08-989e-0db3f81747a2","Type":"ContainerStarted","Data":"36a641a6b1b70c124222dc62cf194309690eea2598e7e924e2cc61e70620e100"} Dec 02 13:57:49 crc kubenswrapper[4900]: I1202 13:57:49.693065 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:49 crc kubenswrapper[4900]: I1202 13:57:49.733380 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-n75f5" podStartSLOduration=5.355209182 podStartE2EDuration="13.733352389s" podCreationTimestamp="2025-12-02 13:57:36 +0000 UTC" firstStartedPulling="2025-12-02 13:57:36.78503131 +0000 UTC m=+902.200845161" lastFinishedPulling="2025-12-02 13:57:45.163174477 +0000 UTC m=+910.578988368" observedRunningTime="2025-12-02 13:57:49.72943597 +0000 UTC m=+915.145249861" watchObservedRunningTime="2025-12-02 13:57:49.733352389 +0000 UTC m=+915.149166290" Dec 02 13:57:50 crc kubenswrapper[4900]: I1202 13:57:50.354322 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-c4q7g" Dec 02 13:57:51 crc kubenswrapper[4900]: I1202 13:57:51.652694 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:51 crc kubenswrapper[4900]: I1202 13:57:51.695269 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-n75f5" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.153965 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh"] Dec 02 13:57:52 crc kubenswrapper[4900]: E1202 13:57:52.154195 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c1be85-3b50-400a-9b15-10f599f378ac" containerName="registry-server" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.154206 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c1be85-3b50-400a-9b15-10f599f378ac" containerName="registry-server" Dec 02 13:57:52 crc kubenswrapper[4900]: E1202 13:57:52.154218 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c1be85-3b50-400a-9b15-10f599f378ac" containerName="extract-utilities" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.154225 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c1be85-3b50-400a-9b15-10f599f378ac" containerName="extract-utilities" Dec 02 13:57:52 crc kubenswrapper[4900]: E1202 13:57:52.154236 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c1be85-3b50-400a-9b15-10f599f378ac" 
containerName="extract-content" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.154244 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c1be85-3b50-400a-9b15-10f599f378ac" containerName="extract-content" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.154340 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c1be85-3b50-400a-9b15-10f599f378ac" containerName="registry-server" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.155096 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.156774 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.163366 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh"] Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.258951 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.259023 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.259070 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc5qb\" (UniqueName: \"kubernetes.io/projected/d5eadb57-166f-40ac-aa67-845abc3e919f-kube-api-access-gc5qb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.361452 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.361592 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.361677 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gc5qb\" (UniqueName: \"kubernetes.io/projected/d5eadb57-166f-40ac-aa67-845abc3e919f-kube-api-access-gc5qb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.362102 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.362428 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.396183 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc5qb\" (UniqueName: \"kubernetes.io/projected/d5eadb57-166f-40ac-aa67-845abc3e919f-kube-api-access-gc5qb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.472723 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:57:52 crc kubenswrapper[4900]: I1202 13:57:52.748524 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh"] Dec 02 13:57:52 crc kubenswrapper[4900]: W1202 13:57:52.757540 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5eadb57_166f_40ac_aa67_845abc3e919f.slice/crio-b45dee3a6a0b23212b7a700c572cf80eaeae0bc54094bed980d663cf64a2a7f5 WatchSource:0}: Error finding container b45dee3a6a0b23212b7a700c572cf80eaeae0bc54094bed980d663cf64a2a7f5: Status 404 returned error can't find the container with id b45dee3a6a0b23212b7a700c572cf80eaeae0bc54094bed980d663cf64a2a7f5 Dec 02 13:57:53 crc kubenswrapper[4900]: I1202 13:57:53.728037 4900 generic.go:334] "Generic (PLEG): container finished" podID="d5eadb57-166f-40ac-aa67-845abc3e919f" containerID="4646af7faee1e3f217ea661676eafe0d68313f98ffa425cdf1868a1a3633779c" exitCode=0 Dec 02 13:57:53 crc kubenswrapper[4900]: I1202 13:57:53.728122 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" event={"ID":"d5eadb57-166f-40ac-aa67-845abc3e919f","Type":"ContainerDied","Data":"4646af7faee1e3f217ea661676eafe0d68313f98ffa425cdf1868a1a3633779c"} Dec 02 13:57:53 crc kubenswrapper[4900]: I1202 13:57:53.728315 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" event={"ID":"d5eadb57-166f-40ac-aa67-845abc3e919f","Type":"ContainerStarted","Data":"b45dee3a6a0b23212b7a700c572cf80eaeae0bc54094bed980d663cf64a2a7f5"} Dec 02 13:57:57 crc kubenswrapper[4900]: I1202 13:57:57.250529 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-qpbtr" Dec 02 13:57:57 crc kubenswrapper[4900]: I1202 13:57:57.437254 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-4w4nt" Dec 02 13:57:59 crc kubenswrapper[4900]: I1202 13:57:59.774884 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" event={"ID":"d5eadb57-166f-40ac-aa67-845abc3e919f","Type":"ContainerStarted","Data":"0d6bd51e8a1123c039152a07381af06fa861af8c8210f8e84c5beece6df1b6da"} Dec 02 13:58:00 crc kubenswrapper[4900]: I1202 13:58:00.785630 4900 generic.go:334] "Generic (PLEG): container finished" podID="d5eadb57-166f-40ac-aa67-845abc3e919f" containerID="0d6bd51e8a1123c039152a07381af06fa861af8c8210f8e84c5beece6df1b6da" exitCode=0 Dec 02 13:58:00 crc kubenswrapper[4900]: I1202 13:58:00.785729 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" event={"ID":"d5eadb57-166f-40ac-aa67-845abc3e919f","Type":"ContainerDied","Data":"0d6bd51e8a1123c039152a07381af06fa861af8c8210f8e84c5beece6df1b6da"} Dec 02 13:58:01 crc kubenswrapper[4900]: I1202 13:58:01.797042 4900 generic.go:334] "Generic (PLEG): container finished" podID="d5eadb57-166f-40ac-aa67-845abc3e919f" containerID="2f7923c46393114a53a650f0740bb6ee231b136b08d89f258a025c4e753267a5" exitCode=0 Dec 02 13:58:01 crc kubenswrapper[4900]: I1202 13:58:01.797293 4900 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" event={"ID":"d5eadb57-166f-40ac-aa67-845abc3e919f","Type":"ContainerDied","Data":"2f7923c46393114a53a650f0740bb6ee231b136b08d89f258a025c4e753267a5"} Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.134481 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.227808 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-bundle\") pod \"d5eadb57-166f-40ac-aa67-845abc3e919f\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.227884 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc5qb\" (UniqueName: \"kubernetes.io/projected/d5eadb57-166f-40ac-aa67-845abc3e919f-kube-api-access-gc5qb\") pod \"d5eadb57-166f-40ac-aa67-845abc3e919f\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.227905 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-util\") pod \"d5eadb57-166f-40ac-aa67-845abc3e919f\" (UID: \"d5eadb57-166f-40ac-aa67-845abc3e919f\") " Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.228889 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-bundle" (OuterVolumeSpecName: "bundle") pod "d5eadb57-166f-40ac-aa67-845abc3e919f" (UID: "d5eadb57-166f-40ac-aa67-845abc3e919f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.237139 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5eadb57-166f-40ac-aa67-845abc3e919f-kube-api-access-gc5qb" (OuterVolumeSpecName: "kube-api-access-gc5qb") pod "d5eadb57-166f-40ac-aa67-845abc3e919f" (UID: "d5eadb57-166f-40ac-aa67-845abc3e919f"). InnerVolumeSpecName "kube-api-access-gc5qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.241771 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-util" (OuterVolumeSpecName: "util") pod "d5eadb57-166f-40ac-aa67-845abc3e919f" (UID: "d5eadb57-166f-40ac-aa67-845abc3e919f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.330072 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc5qb\" (UniqueName: \"kubernetes.io/projected/d5eadb57-166f-40ac-aa67-845abc3e919f-kube-api-access-gc5qb\") on node \"crc\" DevicePath \"\"" Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.330131 4900 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-util\") on node \"crc\" DevicePath \"\"" Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.330151 4900 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5eadb57-166f-40ac-aa67-845abc3e919f-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.814919 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" event={"ID":"d5eadb57-166f-40ac-aa67-845abc3e919f","Type":"ContainerDied","Data":"b45dee3a6a0b23212b7a700c572cf80eaeae0bc54094bed980d663cf64a2a7f5"} Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.814999 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b45dee3a6a0b23212b7a700c572cf80eaeae0bc54094bed980d663cf64a2a7f5" Dec 02 13:58:03 crc kubenswrapper[4900]: I1202 13:58:03.815139 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh" Dec 02 13:58:06 crc kubenswrapper[4900]: I1202 13:58:06.656990 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-n75f5" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.101445 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn"] Dec 02 13:58:11 crc kubenswrapper[4900]: E1202 13:58:11.102379 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5eadb57-166f-40ac-aa67-845abc3e919f" containerName="util" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.102400 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5eadb57-166f-40ac-aa67-845abc3e919f" containerName="util" Dec 02 13:58:11 crc kubenswrapper[4900]: E1202 13:58:11.102431 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5eadb57-166f-40ac-aa67-845abc3e919f" containerName="pull" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.102442 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5eadb57-166f-40ac-aa67-845abc3e919f" containerName="pull" Dec 02 13:58:11 crc kubenswrapper[4900]: E1202 13:58:11.102459 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5eadb57-166f-40ac-aa67-845abc3e919f" containerName="extract" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.102471 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5eadb57-166f-40ac-aa67-845abc3e919f" containerName="extract" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.103300 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5eadb57-166f-40ac-aa67-845abc3e919f" containerName="extract" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.103947 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.106877 4900 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-s952b" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.107191 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.107509 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.118221 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn"] Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.143202 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/168a2810-64d6-4a6b-9086-2c68b0667dac-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pcqxn\" (UID: \"168a2810-64d6-4a6b-9086-2c68b0667dac\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.143294 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sgbj\" (UniqueName: \"kubernetes.io/projected/168a2810-64d6-4a6b-9086-2c68b0667dac-kube-api-access-9sgbj\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pcqxn\" (UID: \"168a2810-64d6-4a6b-9086-2c68b0667dac\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.244980 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/168a2810-64d6-4a6b-9086-2c68b0667dac-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pcqxn\" (UID: \"168a2810-64d6-4a6b-9086-2c68b0667dac\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.245108 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sgbj\" (UniqueName: \"kubernetes.io/projected/168a2810-64d6-4a6b-9086-2c68b0667dac-kube-api-access-9sgbj\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pcqxn\" (UID: \"168a2810-64d6-4a6b-9086-2c68b0667dac\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.245501 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/168a2810-64d6-4a6b-9086-2c68b0667dac-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pcqxn\" (UID: \"168a2810-64d6-4a6b-9086-2c68b0667dac\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.271947 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sgbj\" (UniqueName: \"kubernetes.io/projected/168a2810-64d6-4a6b-9086-2c68b0667dac-kube-api-access-9sgbj\") pod \"cert-manager-operator-controller-manager-64cf6dff88-pcqxn\" (UID: \"168a2810-64d6-4a6b-9086-2c68b0667dac\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.454286 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn" Dec 02 13:58:11 crc kubenswrapper[4900]: I1202 13:58:11.962188 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn"] Dec 02 13:58:11 crc kubenswrapper[4900]: W1202 13:58:11.969044 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod168a2810_64d6_4a6b_9086_2c68b0667dac.slice/crio-085729b900b522e98af52d7c1c80a1623d3f83e793185c1221a729f7595a0244 WatchSource:0}: Error finding container 085729b900b522e98af52d7c1c80a1623d3f83e793185c1221a729f7595a0244: Status 404 returned error can't find the container with id 085729b900b522e98af52d7c1c80a1623d3f83e793185c1221a729f7595a0244 Dec 02 13:58:12 crc kubenswrapper[4900]: I1202 13:58:12.894982 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn" event={"ID":"168a2810-64d6-4a6b-9086-2c68b0667dac","Type":"ContainerStarted","Data":"085729b900b522e98af52d7c1c80a1623d3f83e793185c1221a729f7595a0244"} Dec 02 13:58:15 crc kubenswrapper[4900]: I1202 13:58:15.116181 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:58:15 crc kubenswrapper[4900]: I1202 13:58:15.116236 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:58:20 crc kubenswrapper[4900]: I1202 13:58:20.959725 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn" event={"ID":"168a2810-64d6-4a6b-9086-2c68b0667dac","Type":"ContainerStarted","Data":"f21d7a41e2e3e37df1ff1920460bfc59386acb92dba2567017274b34994fe111"} Dec 02 13:58:22 crc kubenswrapper[4900]: I1202 13:58:22.836258 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-pcqxn" podStartSLOduration=3.943635057 podStartE2EDuration="11.836241897s" podCreationTimestamp="2025-12-02 13:58:11 +0000 UTC" firstStartedPulling="2025-12-02 13:58:11.972693244 +0000 UTC m=+937.388507105" lastFinishedPulling="2025-12-02 13:58:19.865300084 +0000 UTC m=+945.281113945" observedRunningTime="2025-12-02 13:58:21.006365078 +0000 UTC m=+946.422178969" watchObservedRunningTime="2025-12-02 13:58:22.836241897 +0000 UTC m=+948.252055748" Dec 02 13:58:22 crc kubenswrapper[4900]: I1202 13:58:22.839023 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-h4b6p"] Dec 02 13:58:22 crc kubenswrapper[4900]: I1202 13:58:22.839724 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" Dec 02 13:58:22 crc kubenswrapper[4900]: I1202 13:58:22.841415 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 02 13:58:22 crc kubenswrapper[4900]: I1202 13:58:22.841435 4900 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-lqz9q" Dec 02 13:58:22 crc kubenswrapper[4900]: I1202 13:58:22.841450 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 02 13:58:22 crc kubenswrapper[4900]: I1202 13:58:22.849440 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-h4b6p"] Dec 02 13:58:22 crc kubenswrapper[4900]: I1202 13:58:22.903188 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pxbr\" (UniqueName: \"kubernetes.io/projected/acf8bd75-e23d-4fc9-99d9-0571bbe33ae3-kube-api-access-4pxbr\") pod \"cert-manager-webhook-f4fb5df64-h4b6p\" (UID: \"acf8bd75-e23d-4fc9-99d9-0571bbe33ae3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" Dec 02 13:58:22 crc kubenswrapper[4900]: I1202 13:58:22.903515 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/acf8bd75-e23d-4fc9-99d9-0571bbe33ae3-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-h4b6p\" (UID: \"acf8bd75-e23d-4fc9-99d9-0571bbe33ae3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" Dec 02 13:58:23 crc kubenswrapper[4900]: I1202 13:58:23.004885 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pxbr\" (UniqueName: \"kubernetes.io/projected/acf8bd75-e23d-4fc9-99d9-0571bbe33ae3-kube-api-access-4pxbr\") pod \"cert-manager-webhook-f4fb5df64-h4b6p\" (UID: \"acf8bd75-e23d-4fc9-99d9-0571bbe33ae3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" Dec 02 13:58:23 crc kubenswrapper[4900]: I1202 13:58:23.004977 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/acf8bd75-e23d-4fc9-99d9-0571bbe33ae3-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-h4b6p\" (UID: \"acf8bd75-e23d-4fc9-99d9-0571bbe33ae3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" Dec 02 13:58:23 crc kubenswrapper[4900]: I1202 13:58:23.029379 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/acf8bd75-e23d-4fc9-99d9-0571bbe33ae3-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-h4b6p\" (UID: \"acf8bd75-e23d-4fc9-99d9-0571bbe33ae3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" Dec 02 13:58:23 crc kubenswrapper[4900]: I1202 13:58:23.068701 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pxbr\" (UniqueName: \"kubernetes.io/projected/acf8bd75-e23d-4fc9-99d9-0571bbe33ae3-kube-api-access-4pxbr\") pod \"cert-manager-webhook-f4fb5df64-h4b6p\" (UID: \"acf8bd75-e23d-4fc9-99d9-0571bbe33ae3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" Dec 02 13:58:23 crc kubenswrapper[4900]: I1202 13:58:23.155867 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" Dec 02 13:58:23 crc kubenswrapper[4900]: I1202 13:58:23.643081 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-h4b6p"] Dec 02 13:58:23 crc kubenswrapper[4900]: I1202 13:58:23.990051 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" event={"ID":"acf8bd75-e23d-4fc9-99d9-0571bbe33ae3","Type":"ContainerStarted","Data":"455f2579371c936aec863d4697b9721fbcf6f6fceb06a539f7b61118eca264c8"} Dec 02 13:58:25 crc kubenswrapper[4900]: I1202 13:58:25.977807 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-grnhn"] Dec 02 13:58:25 crc kubenswrapper[4900]: I1202 13:58:25.980210 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-grnhn" Dec 02 13:58:25 crc kubenswrapper[4900]: I1202 13:58:25.982781 4900 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-z8wb6" Dec 02 13:58:25 crc kubenswrapper[4900]: I1202 13:58:25.983545 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-grnhn"] Dec 02 13:58:26 crc kubenswrapper[4900]: I1202 13:58:26.060859 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14f84abe-cdad-4668-acd1-860a7bc5a9d6-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-grnhn\" (UID: \"14f84abe-cdad-4668-acd1-860a7bc5a9d6\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-grnhn" Dec 02 13:58:26 crc kubenswrapper[4900]: I1202 13:58:26.060906 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpf8v\" (UniqueName: \"kubernetes.io/projected/14f84abe-cdad-4668-acd1-860a7bc5a9d6-kube-api-access-cpf8v\") pod \"cert-manager-cainjector-855d9ccff4-grnhn\" (UID: \"14f84abe-cdad-4668-acd1-860a7bc5a9d6\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-grnhn" Dec 02 13:58:26 crc kubenswrapper[4900]: I1202 13:58:26.162368 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14f84abe-cdad-4668-acd1-860a7bc5a9d6-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-grnhn\" (UID: \"14f84abe-cdad-4668-acd1-860a7bc5a9d6\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-grnhn" Dec 02 13:58:26 crc kubenswrapper[4900]: I1202 13:58:26.162439 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpf8v\" (UniqueName: \"kubernetes.io/projected/14f84abe-cdad-4668-acd1-860a7bc5a9d6-kube-api-access-cpf8v\") pod \"cert-manager-cainjector-855d9ccff4-grnhn\" (UID: \"14f84abe-cdad-4668-acd1-860a7bc5a9d6\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-grnhn" Dec 02 13:58:26 crc kubenswrapper[4900]: I1202 13:58:26.183354 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpf8v\" (UniqueName: \"kubernetes.io/projected/14f84abe-cdad-4668-acd1-860a7bc5a9d6-kube-api-access-cpf8v\") pod \"cert-manager-cainjector-855d9ccff4-grnhn\" (UID: \"14f84abe-cdad-4668-acd1-860a7bc5a9d6\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-grnhn" Dec 02 13:58:26 crc kubenswrapper[4900]: I1202 13:58:26.184082 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14f84abe-cdad-4668-acd1-860a7bc5a9d6-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-grnhn\" (UID: \"14f84abe-cdad-4668-acd1-860a7bc5a9d6\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-grnhn" Dec 02 13:58:26 crc kubenswrapper[4900]: I1202 13:58:26.296504 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-grnhn" Dec 02 13:58:26 crc kubenswrapper[4900]: I1202 13:58:26.519748 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-grnhn"] Dec 02 13:58:27 crc kubenswrapper[4900]: I1202 13:58:27.041210 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-grnhn" event={"ID":"14f84abe-cdad-4668-acd1-860a7bc5a9d6","Type":"ContainerStarted","Data":"f281ead2365b5b01fbaceec4df072047fb90c997b4304844d07680b0d80bd325"} Dec 02 13:58:34 crc kubenswrapper[4900]: I1202 13:58:34.286797 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" event={"ID":"acf8bd75-e23d-4fc9-99d9-0571bbe33ae3","Type":"ContainerStarted","Data":"40a23d8b1e6340c958e25905aab7dd1ed5dce559a503c2f6c9b42342449f83ec"} Dec 02 13:58:34 crc kubenswrapper[4900]: I1202 13:58:34.287375 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" Dec 02 13:58:34 crc kubenswrapper[4900]: I1202 13:58:34.289112 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-grnhn" event={"ID":"14f84abe-cdad-4668-acd1-860a7bc5a9d6","Type":"ContainerStarted","Data":"8d032aaebf77c710511f3076b1a8a37a40134a06993ceedbf022a0c2aa114daf"} Dec 02 13:58:34 crc kubenswrapper[4900]: I1202 13:58:34.308537 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" podStartSLOduration=2.3544744619999998 podStartE2EDuration="12.308515544s" podCreationTimestamp="2025-12-02 13:58:22 +0000 UTC" firstStartedPulling="2025-12-02 13:58:23.645264669 +0000 UTC m=+949.061078520" lastFinishedPulling="2025-12-02 13:58:33.599305751 +0000 UTC m=+959.015119602" observedRunningTime="2025-12-02 13:58:34.306401326 +0000 UTC m=+959.722215177" watchObservedRunningTime="2025-12-02 13:58:34.308515544 +0000 UTC m=+959.724329425" Dec 02 13:58:34 crc kubenswrapper[4900]: I1202 13:58:34.325899 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-grnhn" podStartSLOduration=2.2945880069999998 podStartE2EDuration="9.325874408s" podCreationTimestamp="2025-12-02 13:58:25 +0000 UTC" firstStartedPulling="2025-12-02 13:58:26.529889729 +0000 UTC m=+951.945703580" lastFinishedPulling="2025-12-02 13:58:33.56117613 +0000 UTC m=+958.976989981" observedRunningTime="2025-12-02 13:58:34.322483453 +0000 UTC m=+959.738297304" watchObservedRunningTime="2025-12-02 13:58:34.325874408 +0000 UTC m=+959.741688299" Dec 02 13:58:38 crc kubenswrapper[4900]: I1202 13:58:38.159442 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-h4b6p" Dec 02 13:58:41 crc kubenswrapper[4900]: I1202 13:58:41.751121 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-mv79b"] Dec 02 13:58:41 crc 
kubenswrapper[4900]: I1202 13:58:41.754964 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-mv79b" Dec 02 13:58:41 crc kubenswrapper[4900]: I1202 13:58:41.758617 4900 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-v5mp5" Dec 02 13:58:41 crc kubenswrapper[4900]: I1202 13:58:41.767696 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-mv79b"] Dec 02 13:58:41 crc kubenswrapper[4900]: I1202 13:58:41.892484 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thmkq\" (UniqueName: \"kubernetes.io/projected/0b8dc943-2813-4eb2-8c2e-741f60ba09df-kube-api-access-thmkq\") pod \"cert-manager-86cb77c54b-mv79b\" (UID: \"0b8dc943-2813-4eb2-8c2e-741f60ba09df\") " pod="cert-manager/cert-manager-86cb77c54b-mv79b" Dec 02 13:58:41 crc kubenswrapper[4900]: I1202 13:58:41.892737 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b8dc943-2813-4eb2-8c2e-741f60ba09df-bound-sa-token\") pod \"cert-manager-86cb77c54b-mv79b\" (UID: \"0b8dc943-2813-4eb2-8c2e-741f60ba09df\") " pod="cert-manager/cert-manager-86cb77c54b-mv79b" Dec 02 13:58:41 crc kubenswrapper[4900]: I1202 13:58:41.994198 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thmkq\" (UniqueName: \"kubernetes.io/projected/0b8dc943-2813-4eb2-8c2e-741f60ba09df-kube-api-access-thmkq\") pod \"cert-manager-86cb77c54b-mv79b\" (UID: \"0b8dc943-2813-4eb2-8c2e-741f60ba09df\") " pod="cert-manager/cert-manager-86cb77c54b-mv79b" Dec 02 13:58:41 crc kubenswrapper[4900]: I1202 13:58:41.994342 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b8dc943-2813-4eb2-8c2e-741f60ba09df-bound-sa-token\") pod \"cert-manager-86cb77c54b-mv79b\" (UID: \"0b8dc943-2813-4eb2-8c2e-741f60ba09df\") " pod="cert-manager/cert-manager-86cb77c54b-mv79b" Dec 02 13:58:42 crc kubenswrapper[4900]: I1202 13:58:42.028193 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thmkq\" (UniqueName: \"kubernetes.io/projected/0b8dc943-2813-4eb2-8c2e-741f60ba09df-kube-api-access-thmkq\") pod \"cert-manager-86cb77c54b-mv79b\" (UID: \"0b8dc943-2813-4eb2-8c2e-741f60ba09df\") " pod="cert-manager/cert-manager-86cb77c54b-mv79b" Dec 02 13:58:42 crc kubenswrapper[4900]: I1202 13:58:42.028242 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b8dc943-2813-4eb2-8c2e-741f60ba09df-bound-sa-token\") pod \"cert-manager-86cb77c54b-mv79b\" (UID: \"0b8dc943-2813-4eb2-8c2e-741f60ba09df\") " pod="cert-manager/cert-manager-86cb77c54b-mv79b" Dec 02 13:58:42 crc kubenswrapper[4900]: I1202 13:58:42.117356 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-mv79b" Dec 02 13:58:42 crc kubenswrapper[4900]: I1202 13:58:42.349970 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-mv79b"] Dec 02 13:58:43 crc kubenswrapper[4900]: I1202 13:58:43.364747 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-mv79b" event={"ID":"0b8dc943-2813-4eb2-8c2e-741f60ba09df","Type":"ContainerStarted","Data":"c93f7539888f91c8476b0b548b4ed3cc2c43680addb5e667fd41b6442c802879"} Dec 02 13:58:43 crc kubenswrapper[4900]: I1202 13:58:43.365115 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-mv79b" event={"ID":"0b8dc943-2813-4eb2-8c2e-741f60ba09df","Type":"ContainerStarted","Data":"55750fa69ceef1820ef2fdfbec7809726c6db9cc018bbec5404886e48120175c"} Dec 02 13:58:43 crc kubenswrapper[4900]: I1202 13:58:43.391428 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-mv79b" podStartSLOduration=2.391387819 podStartE2EDuration="2.391387819s" podCreationTimestamp="2025-12-02 13:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:58:43.389554568 +0000 UTC m=+968.805368449" watchObservedRunningTime="2025-12-02 13:58:43.391387819 +0000 UTC m=+968.807201700" Dec 02 13:58:45 crc kubenswrapper[4900]: I1202 13:58:45.116399 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:58:45 crc kubenswrapper[4900]: I1202 13:58:45.116488 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:58:45 crc kubenswrapper[4900]: I1202 13:58:45.116552 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 13:58:45 crc kubenswrapper[4900]: I1202 13:58:45.117424 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96fc286beb52d1fa09b32c5aa1607bec1c64d198ad304a0191d978063a0b9ab5"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 13:58:45 crc kubenswrapper[4900]: I1202 13:58:45.117503 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://96fc286beb52d1fa09b32c5aa1607bec1c64d198ad304a0191d978063a0b9ab5" gracePeriod=600 Dec 02 13:58:45 crc kubenswrapper[4900]: I1202 13:58:45.388953 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="96fc286beb52d1fa09b32c5aa1607bec1c64d198ad304a0191d978063a0b9ab5" exitCode=0 Dec 02 13:58:45 crc kubenswrapper[4900]: I1202 13:58:45.389042 
4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"96fc286beb52d1fa09b32c5aa1607bec1c64d198ad304a0191d978063a0b9ab5"} Dec 02 13:58:45 crc kubenswrapper[4900]: I1202 13:58:45.389494 4900 scope.go:117] "RemoveContainer" containerID="a09a1c5505ad87f53094560a2e55e53cf8b4e88f885e7ed1b8c3af7bddb65c71" Dec 02 13:58:46 crc kubenswrapper[4900]: I1202 13:58:46.403564 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"b6f7e930d50720476a444b744878daf723fcb619125b830c5f6dce6cf097c072"} Dec 02 13:58:52 crc kubenswrapper[4900]: I1202 13:58:52.357350 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wpccf"] Dec 02 13:58:52 crc kubenswrapper[4900]: I1202 13:58:52.359285 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wpccf" Dec 02 13:58:52 crc kubenswrapper[4900]: I1202 13:58:52.361814 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 02 13:58:52 crc kubenswrapper[4900]: I1202 13:58:52.362400 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9bktn" Dec 02 13:58:52 crc kubenswrapper[4900]: I1202 13:58:52.362739 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 02 13:58:52 crc kubenswrapper[4900]: I1202 13:58:52.397016 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wpccf"] Dec 02 13:58:52 crc kubenswrapper[4900]: I1202 13:58:52.502004 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtxrz\" (UniqueName: \"kubernetes.io/projected/cb2b8df8-37f2-42a8-95e7-f39d1d710ebd-kube-api-access-jtxrz\") pod \"openstack-operator-index-wpccf\" (UID: \"cb2b8df8-37f2-42a8-95e7-f39d1d710ebd\") " pod="openstack-operators/openstack-operator-index-wpccf" Dec 02 13:58:52 crc kubenswrapper[4900]: I1202 13:58:52.603952 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtxrz\" (UniqueName: \"kubernetes.io/projected/cb2b8df8-37f2-42a8-95e7-f39d1d710ebd-kube-api-access-jtxrz\") pod \"openstack-operator-index-wpccf\" (UID: \"cb2b8df8-37f2-42a8-95e7-f39d1d710ebd\") " pod="openstack-operators/openstack-operator-index-wpccf" Dec 02 13:58:52 crc kubenswrapper[4900]: I1202 13:58:52.622688 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtxrz\" (UniqueName: \"kubernetes.io/projected/cb2b8df8-37f2-42a8-95e7-f39d1d710ebd-kube-api-access-jtxrz\") pod \"openstack-operator-index-wpccf\" (UID: \"cb2b8df8-37f2-42a8-95e7-f39d1d710ebd\") " pod="openstack-operators/openstack-operator-index-wpccf" Dec 02 13:58:52 crc kubenswrapper[4900]: I1202 13:58:52.694299 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wpccf" Dec 02 13:58:53 crc kubenswrapper[4900]: I1202 13:58:53.191312 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wpccf"] Dec 02 13:58:53 crc kubenswrapper[4900]: W1202 13:58:53.203375 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb2b8df8_37f2_42a8_95e7_f39d1d710ebd.slice/crio-ff1c13cdf3dcaedd3405bf287c56172151fc38056ed199868a1aa1ea4d6a208c WatchSource:0}: Error finding container ff1c13cdf3dcaedd3405bf287c56172151fc38056ed199868a1aa1ea4d6a208c: Status 404 returned error can't find the container with id ff1c13cdf3dcaedd3405bf287c56172151fc38056ed199868a1aa1ea4d6a208c Dec 02 13:58:53 crc kubenswrapper[4900]: I1202 13:58:53.462475 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wpccf" event={"ID":"cb2b8df8-37f2-42a8-95e7-f39d1d710ebd","Type":"ContainerStarted","Data":"ff1c13cdf3dcaedd3405bf287c56172151fc38056ed199868a1aa1ea4d6a208c"} Dec 02 13:58:55 crc kubenswrapper[4900]: I1202 13:58:55.722104 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-wpccf"] Dec 02 13:58:56 crc kubenswrapper[4900]: I1202 13:58:56.336073 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hz9dn"] Dec 02 13:58:56 crc kubenswrapper[4900]: I1202 13:58:56.337871 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hz9dn" Dec 02 13:58:56 crc kubenswrapper[4900]: I1202 13:58:56.343964 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hz9dn"] Dec 02 13:58:56 crc kubenswrapper[4900]: I1202 13:58:56.387240 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2wrb\" (UniqueName: \"kubernetes.io/projected/c9f91c8d-4840-4c3e-9a19-2e9e64a76f60-kube-api-access-l2wrb\") pod \"openstack-operator-index-hz9dn\" (UID: \"c9f91c8d-4840-4c3e-9a19-2e9e64a76f60\") " pod="openstack-operators/openstack-operator-index-hz9dn" Dec 02 13:58:56 crc kubenswrapper[4900]: I1202 13:58:56.488629 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2wrb\" (UniqueName: \"kubernetes.io/projected/c9f91c8d-4840-4c3e-9a19-2e9e64a76f60-kube-api-access-l2wrb\") pod \"openstack-operator-index-hz9dn\" (UID: \"c9f91c8d-4840-4c3e-9a19-2e9e64a76f60\") " pod="openstack-operators/openstack-operator-index-hz9dn" Dec 02 13:58:56 crc kubenswrapper[4900]: I1202 13:58:56.524736 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2wrb\" (UniqueName: \"kubernetes.io/projected/c9f91c8d-4840-4c3e-9a19-2e9e64a76f60-kube-api-access-l2wrb\") pod \"openstack-operator-index-hz9dn\" (UID: \"c9f91c8d-4840-4c3e-9a19-2e9e64a76f60\") " pod="openstack-operators/openstack-operator-index-hz9dn" Dec 02 13:58:56 crc kubenswrapper[4900]: I1202 13:58:56.661934 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hz9dn" Dec 02 13:58:57 crc kubenswrapper[4900]: I1202 13:58:57.928068 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hz9dn"] Dec 02 13:58:59 crc kubenswrapper[4900]: I1202 13:58:59.518001 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hz9dn" event={"ID":"c9f91c8d-4840-4c3e-9a19-2e9e64a76f60","Type":"ContainerStarted","Data":"e950548503cb3d13ae5657ef8becfc99da42eea288cdd794b066a0629de5df8c"} Dec 02 13:58:59 crc kubenswrapper[4900]: I1202 13:58:59.518918 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hz9dn" event={"ID":"c9f91c8d-4840-4c3e-9a19-2e9e64a76f60","Type":"ContainerStarted","Data":"e409d3be12a4eca58c3c9f2e6501fd341a62d33fa1fac1bd61283cd776cd0220"} Dec 02 13:58:59 crc kubenswrapper[4900]: I1202 13:58:59.521047 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wpccf" event={"ID":"cb2b8df8-37f2-42a8-95e7-f39d1d710ebd","Type":"ContainerStarted","Data":"ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909"} Dec 02 13:58:59 crc kubenswrapper[4900]: I1202 13:58:59.521279 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-wpccf" podUID="cb2b8df8-37f2-42a8-95e7-f39d1d710ebd" containerName="registry-server" containerID="cri-o://ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909" gracePeriod=2 Dec 02 13:58:59 crc kubenswrapper[4900]: I1202 13:58:59.553716 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hz9dn" podStartSLOduration=3.481772342 podStartE2EDuration="3.553677904s" podCreationTimestamp="2025-12-02 13:58:56 +0000 UTC" firstStartedPulling="2025-12-02 13:58:58.821038319 +0000 UTC m=+984.236852180" lastFinishedPulling="2025-12-02 13:58:58.892943891 +0000 UTC m=+984.308757742" observedRunningTime="2025-12-02 13:58:59.541367901 +0000 UTC m=+984.957181762" watchObservedRunningTime="2025-12-02 13:58:59.553677904 +0000 UTC m=+984.969491765" Dec 02 13:58:59 crc kubenswrapper[4900]: I1202 13:58:59.588118 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wpccf" podStartSLOduration=1.969250349 podStartE2EDuration="7.588087472s" podCreationTimestamp="2025-12-02 13:58:52 +0000 UTC" firstStartedPulling="2025-12-02 13:58:53.207127223 +0000 UTC m=+978.622941074" lastFinishedPulling="2025-12-02 13:58:58.825964336 +0000 UTC m=+984.241778197" observedRunningTime="2025-12-02 13:58:59.576596522 +0000 UTC m=+984.992410413" watchObservedRunningTime="2025-12-02 13:58:59.588087472 +0000 UTC m=+985.003901353" Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.435283 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wpccf" Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.533508 4900 generic.go:334] "Generic (PLEG): container finished" podID="cb2b8df8-37f2-42a8-95e7-f39d1d710ebd" containerID="ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909" exitCode=0 Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.533600 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wpccf" Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.533713 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wpccf" event={"ID":"cb2b8df8-37f2-42a8-95e7-f39d1d710ebd","Type":"ContainerDied","Data":"ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909"} Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.533810 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wpccf" event={"ID":"cb2b8df8-37f2-42a8-95e7-f39d1d710ebd","Type":"ContainerDied","Data":"ff1c13cdf3dcaedd3405bf287c56172151fc38056ed199868a1aa1ea4d6a208c"} Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.533850 4900 scope.go:117] "RemoveContainer" containerID="ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909" Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.555060 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtxrz\" (UniqueName: \"kubernetes.io/projected/cb2b8df8-37f2-42a8-95e7-f39d1d710ebd-kube-api-access-jtxrz\") pod \"cb2b8df8-37f2-42a8-95e7-f39d1d710ebd\" (UID: \"cb2b8df8-37f2-42a8-95e7-f39d1d710ebd\") " Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.562563 4900 scope.go:117] "RemoveContainer" containerID="ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909" Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.565144 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2b8df8-37f2-42a8-95e7-f39d1d710ebd-kube-api-access-jtxrz" (OuterVolumeSpecName: "kube-api-access-jtxrz") pod "cb2b8df8-37f2-42a8-95e7-f39d1d710ebd" (UID: "cb2b8df8-37f2-42a8-95e7-f39d1d710ebd"). InnerVolumeSpecName "kube-api-access-jtxrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:59:00 crc kubenswrapper[4900]: E1202 13:59:00.565242 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909\": container with ID starting with ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909 not found: ID does not exist" containerID="ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909" Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.565276 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909"} err="failed to get container status \"ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909\": rpc error: code = NotFound desc = could not find container \"ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909\": container with ID starting with ea8d6af160e06b43e17654fdeff853c1e8897a2bcffa32a7e47070ffcbe4a909 not found: ID does not exist" Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.656536 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtxrz\" (UniqueName: \"kubernetes.io/projected/cb2b8df8-37f2-42a8-95e7-f39d1d710ebd-kube-api-access-jtxrz\") on node \"crc\" DevicePath \"\"" Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.885301 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-wpccf"] Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.894320 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-wpccf"] Dec 02 13:59:00 crc kubenswrapper[4900]: I1202 13:59:00.922777 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb2b8df8-37f2-42a8-95e7-f39d1d710ebd" path="/var/lib/kubelet/pods/cb2b8df8-37f2-42a8-95e7-f39d1d710ebd/volumes" Dec 02 13:59:06 crc kubenswrapper[4900]: I1202 13:59:06.662766 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hz9dn" Dec 02 13:59:06 crc kubenswrapper[4900]: I1202 13:59:06.663374 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-hz9dn" Dec 02 13:59:06 crc kubenswrapper[4900]: I1202 13:59:06.707712 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-hz9dn" Dec 02 13:59:07 crc kubenswrapper[4900]: I1202 13:59:07.624986 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-hz9dn" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.588270 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7"] Dec 02 13:59:08 crc kubenswrapper[4900]: E1202 13:59:08.588951 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2b8df8-37f2-42a8-95e7-f39d1d710ebd" containerName="registry-server" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.588974 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2b8df8-37f2-42a8-95e7-f39d1d710ebd" containerName="registry-server" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.589185 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2b8df8-37f2-42a8-95e7-f39d1d710ebd" 
containerName="registry-server" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.590667 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.594330 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-x77n7" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.612612 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7"] Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.708876 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-util\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.709110 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnh46\" (UniqueName: \"kubernetes.io/projected/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-kube-api-access-tnh46\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.709208 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-bundle\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.811304 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-util\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.811409 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnh46\" (UniqueName: \"kubernetes.io/projected/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-kube-api-access-tnh46\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.811482 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-bundle\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.812100 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-util\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.812187 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-bundle\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.832854 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnh46\" (UniqueName: \"kubernetes.io/projected/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-kube-api-access-tnh46\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:08 crc kubenswrapper[4900]: I1202 13:59:08.953276 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:09 crc kubenswrapper[4900]: I1202 13:59:09.455541 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7"] Dec 02 13:59:09 crc kubenswrapper[4900]: I1202 13:59:09.603073 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" event={"ID":"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18","Type":"ContainerStarted","Data":"97e855bb047fe58b0cfbe91ce735efd10899b0d82662801952af187bc704590b"} Dec 02 13:59:11 crc kubenswrapper[4900]: I1202 13:59:11.622120 4900 generic.go:334] "Generic (PLEG): container finished" podID="1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" containerID="a2e13d344f229b7588819571035d9112f6337ae8fe06d760ad71ab5489fbb060" exitCode=0 Dec 02 13:59:11 crc kubenswrapper[4900]: I1202 13:59:11.622191 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" event={"ID":"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18","Type":"ContainerDied","Data":"a2e13d344f229b7588819571035d9112f6337ae8fe06d760ad71ab5489fbb060"} Dec 02 13:59:12 crc kubenswrapper[4900]: I1202 13:59:12.636494 4900 generic.go:334] "Generic (PLEG): container finished" podID="1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" containerID="165f10012ac1d86b39a984c34385cdd689facb219e96c106deaec0b0a565a308" exitCode=0 Dec 02 13:59:12 crc kubenswrapper[4900]: I1202 13:59:12.636563 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" event={"ID":"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18","Type":"ContainerDied","Data":"165f10012ac1d86b39a984c34385cdd689facb219e96c106deaec0b0a565a308"} Dec 02 13:59:13 crc kubenswrapper[4900]: I1202 13:59:13.649569 4900 generic.go:334] "Generic (PLEG): container finished" podID="1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" 
containerID="0bf2296223b1499c92176cc98dd3cd264e061dd39c60f31ea64ebc88bc9f745a" exitCode=0 Dec 02 13:59:13 crc kubenswrapper[4900]: I1202 13:59:13.649666 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" event={"ID":"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18","Type":"ContainerDied","Data":"0bf2296223b1499c92176cc98dd3cd264e061dd39c60f31ea64ebc88bc9f745a"} Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.010204 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.102714 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnh46\" (UniqueName: \"kubernetes.io/projected/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-kube-api-access-tnh46\") pod \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.102835 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-util\") pod \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.102925 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-bundle\") pod \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\" (UID: \"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18\") " Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.103573 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-bundle" (OuterVolumeSpecName: "bundle") pod "1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" (UID: "1f6d0f83-6d8d-433f-a31a-2f204c4c8c18"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.108615 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-kube-api-access-tnh46" (OuterVolumeSpecName: "kube-api-access-tnh46") pod "1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" (UID: "1f6d0f83-6d8d-433f-a31a-2f204c4c8c18"). InnerVolumeSpecName "kube-api-access-tnh46". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.115382 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-util" (OuterVolumeSpecName: "util") pod "1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" (UID: "1f6d0f83-6d8d-433f-a31a-2f204c4c8c18"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.204893 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnh46\" (UniqueName: \"kubernetes.io/projected/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-kube-api-access-tnh46\") on node \"crc\" DevicePath \"\"" Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.204924 4900 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-util\") on node \"crc\" DevicePath \"\"" Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.204933 4900 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6d0f83-6d8d-433f-a31a-2f204c4c8c18-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.670437 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" event={"ID":"1f6d0f83-6d8d-433f-a31a-2f204c4c8c18","Type":"ContainerDied","Data":"97e855bb047fe58b0cfbe91ce735efd10899b0d82662801952af187bc704590b"} Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.670505 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97e855bb047fe58b0cfbe91ce735efd10899b0d82662801952af187bc704590b" Dec 02 13:59:15 crc kubenswrapper[4900]: I1202 13:59:15.670528 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7" Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.247665 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk"] Dec 02 13:59:21 crc kubenswrapper[4900]: E1202 13:59:21.248583 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" containerName="pull" Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.248599 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" containerName="pull" Dec 02 13:59:21 crc kubenswrapper[4900]: E1202 13:59:21.248615 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" containerName="extract" Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.248625 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" containerName="extract" Dec 02 13:59:21 crc kubenswrapper[4900]: E1202 13:59:21.248660 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" containerName="util" Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.248669 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" containerName="util" Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.248808 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6d0f83-6d8d-433f-a31a-2f204c4c8c18" containerName="extract" Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.249318 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk" Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.251181 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-wlgc5" Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.283142 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk"] Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.403256 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcl2k\" (UniqueName: \"kubernetes.io/projected/f5a97b31-1461-4708-a9ea-373711c869c3-kube-api-access-dcl2k\") pod \"openstack-operator-controller-operator-84d58866d9-pxwtk\" (UID: \"f5a97b31-1461-4708-a9ea-373711c869c3\") " pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk" Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.504768 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcl2k\" (UniqueName: \"kubernetes.io/projected/f5a97b31-1461-4708-a9ea-373711c869c3-kube-api-access-dcl2k\") pod \"openstack-operator-controller-operator-84d58866d9-pxwtk\" (UID: \"f5a97b31-1461-4708-a9ea-373711c869c3\") " pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk" Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.527948 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcl2k\" (UniqueName: \"kubernetes.io/projected/f5a97b31-1461-4708-a9ea-373711c869c3-kube-api-access-dcl2k\") pod \"openstack-operator-controller-operator-84d58866d9-pxwtk\" (UID: \"f5a97b31-1461-4708-a9ea-373711c869c3\") " pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk" Dec 02 13:59:21 crc kubenswrapper[4900]: I1202 13:59:21.565787 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk" Dec 02 13:59:22 crc kubenswrapper[4900]: I1202 13:59:22.155596 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk"] Dec 02 13:59:22 crc kubenswrapper[4900]: I1202 13:59:22.724179 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk" event={"ID":"f5a97b31-1461-4708-a9ea-373711c869c3","Type":"ContainerStarted","Data":"10e30cc7e3a2c34838ce4d38367e89875cc2814d80a4bde4fa80f6aff0250b92"} Dec 02 13:59:28 crc kubenswrapper[4900]: I1202 13:59:28.776252 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk" event={"ID":"f5a97b31-1461-4708-a9ea-373711c869c3","Type":"ContainerStarted","Data":"5928f5e83594ae36e4ad14f57eefbdc748f2cfce4a9498ba09f787aa0ecbce8c"} Dec 02 13:59:28 crc kubenswrapper[4900]: I1202 13:59:28.777213 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk" Dec 02 13:59:28 crc kubenswrapper[4900]: I1202 13:59:28.829263 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk" podStartSLOduration=2.076813446 podStartE2EDuration="7.829246169s" podCreationTimestamp="2025-12-02 13:59:21 +0000 UTC" firstStartedPulling="2025-12-02 13:59:22.163413529 +0000 UTC m=+1007.579227370" lastFinishedPulling="2025-12-02 13:59:27.915846212 +0000 UTC m=+1013.331660093" observedRunningTime="2025-12-02 13:59:28.82425122 +0000 UTC m=+1014.240065091" watchObservedRunningTime="2025-12-02 13:59:28.829246169 +0000 UTC m=+1014.245060030" Dec 02 13:59:41 crc kubenswrapper[4900]: I1202 13:59:41.569599 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-pxwtk" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.113142 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.114784 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.119763 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-44txq" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.133167 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.134207 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.137066 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.139022 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-llxp6" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.150406 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.154758 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.155848 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.158896 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.159636 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.162915 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-gh65v" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.178035 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-sdk6h" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.188685 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.199118 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.200231 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.204669 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-4g2s4" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.215230 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.223703 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.228362 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.228823 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.230201 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-h7ssr" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.240697 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-6479q"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.241793 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.241901 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jgwz\" (UniqueName: \"kubernetes.io/projected/6c51856b-78db-4067-aec4-bdbb2513d6d3-kube-api-access-7jgwz\") pod \"barbican-operator-controller-manager-7d9dfd778-cdb56\" (UID: \"6c51856b-78db-4067-aec4-bdbb2513d6d3\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.241944 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh4x7\" (UniqueName: \"kubernetes.io/projected/90f13638-1211-4cdc-9c96-298ae112e911-kube-api-access-dh4x7\") pod \"cinder-operator-controller-manager-55f4dbb9b7-snhnc\" (UID: \"90f13638-1211-4cdc-9c96-298ae112e911\") " pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.242008 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvjr4\" (UniqueName: \"kubernetes.io/projected/6ad04c0c-8b90-4f63-8ff3-8afe8f1d2495-kube-api-access-gvjr4\") pod \"glance-operator-controller-manager-77987cd8cd-z9xzn\" (UID: \"6ad04c0c-8b90-4f63-8ff3-8afe8f1d2495\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.242083 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvmrb\" (UniqueName: \"kubernetes.io/projected/60b341d0-be93-4332-8eb3-356d0a0b4ee4-kube-api-access-cvmrb\") pod \"designate-operator-controller-manager-78b4bc895b-79n4c\" (UID: \"60b341d0-be93-4332-8eb3-356d0a0b4ee4\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.261611 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.262937 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.280798 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-6479q"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.291047 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.291314 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hhn2g" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.292016 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vpbqw" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.292141 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.296546 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.308022 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zdl6q" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.336901 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.343306 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsffh\" (UniqueName: \"kubernetes.io/projected/2648e9ae-b5db-4196-a921-5a708baae84d-kube-api-access-zsffh\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.343357 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msnlw\" (UniqueName: \"kubernetes.io/projected/a79b1912-6054-4cc9-a584-c7e3e6ca9a31-kube-api-access-msnlw\") pod \"ironic-operator-controller-manager-6c548fd776-77vc7\" (UID: \"a79b1912-6054-4cc9-a584-c7e3e6ca9a31\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.343388 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvmrb\" (UniqueName: \"kubernetes.io/projected/60b341d0-be93-4332-8eb3-356d0a0b4ee4-kube-api-access-cvmrb\") pod \"designate-operator-controller-manager-78b4bc895b-79n4c\" (UID: \"60b341d0-be93-4332-8eb3-356d0a0b4ee4\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.343434 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf6wf\" (UniqueName: \"kubernetes.io/projected/76472776-56db-440f-a0a5-5a45eaa83baa-kube-api-access-pf6wf\") pod \"heat-operator-controller-manager-5f64f6f8bb-l8dqg\" (UID: \"76472776-56db-440f-a0a5-5a45eaa83baa\") " 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.343458 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.343496 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twrsk\" (UniqueName: \"kubernetes.io/projected/c6afcb92-eb6f-4615-8b25-bcdc77eda80e-kube-api-access-twrsk\") pod \"horizon-operator-controller-manager-68c6d99b8f-htb7b\" (UID: \"c6afcb92-eb6f-4615-8b25-bcdc77eda80e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.343531 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jgwz\" (UniqueName: \"kubernetes.io/projected/6c51856b-78db-4067-aec4-bdbb2513d6d3-kube-api-access-7jgwz\") pod \"barbican-operator-controller-manager-7d9dfd778-cdb56\" (UID: \"6c51856b-78db-4067-aec4-bdbb2513d6d3\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.343561 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh4x7\" (UniqueName: \"kubernetes.io/projected/90f13638-1211-4cdc-9c96-298ae112e911-kube-api-access-dh4x7\") pod \"cinder-operator-controller-manager-55f4dbb9b7-snhnc\" (UID: \"90f13638-1211-4cdc-9c96-298ae112e911\") " pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.343590 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvjr4\" (UniqueName: \"kubernetes.io/projected/6ad04c0c-8b90-4f63-8ff3-8afe8f1d2495-kube-api-access-gvjr4\") pod \"glance-operator-controller-manager-77987cd8cd-z9xzn\" (UID: \"6ad04c0c-8b90-4f63-8ff3-8afe8f1d2495\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.350688 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.358348 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.361711 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.362754 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.364605 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-whcnq" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.364991 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.374284 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.375325 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.380315 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nldfq" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.387188 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jgwz\" (UniqueName: \"kubernetes.io/projected/6c51856b-78db-4067-aec4-bdbb2513d6d3-kube-api-access-7jgwz\") pod \"barbican-operator-controller-manager-7d9dfd778-cdb56\" (UID: \"6c51856b-78db-4067-aec4-bdbb2513d6d3\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.387268 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvjr4\" (UniqueName: \"kubernetes.io/projected/6ad04c0c-8b90-4f63-8ff3-8afe8f1d2495-kube-api-access-gvjr4\") pod \"glance-operator-controller-manager-77987cd8cd-z9xzn\" (UID: \"6ad04c0c-8b90-4f63-8ff3-8afe8f1d2495\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.399237 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh4x7\" (UniqueName: \"kubernetes.io/projected/90f13638-1211-4cdc-9c96-298ae112e911-kube-api-access-dh4x7\") pod \"cinder-operator-controller-manager-55f4dbb9b7-snhnc\" (UID: \"90f13638-1211-4cdc-9c96-298ae112e911\") " pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.401968 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.403001 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.414203 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.414577 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvmrb\" (UniqueName: \"kubernetes.io/projected/60b341d0-be93-4332-8eb3-356d0a0b4ee4-kube-api-access-cvmrb\") pod \"designate-operator-controller-manager-78b4bc895b-79n4c\" (UID: \"60b341d0-be93-4332-8eb3-356d0a0b4ee4\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.416907 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-qfx6c" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.419206 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.442360 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.448464 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf6wf\" (UniqueName: \"kubernetes.io/projected/76472776-56db-440f-a0a5-5a45eaa83baa-kube-api-access-pf6wf\") pod \"heat-operator-controller-manager-5f64f6f8bb-l8dqg\" (UID: \"76472776-56db-440f-a0a5-5a45eaa83baa\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.448848 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.448915 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twrsk\" (UniqueName: \"kubernetes.io/projected/c6afcb92-eb6f-4615-8b25-bcdc77eda80e-kube-api-access-twrsk\") pod \"horizon-operator-controller-manager-68c6d99b8f-htb7b\" (UID: \"c6afcb92-eb6f-4615-8b25-bcdc77eda80e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.448962 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq45c\" (UniqueName: \"kubernetes.io/projected/a6ef169a-4706-4704-bc8a-4afe5a1d4ac9-kube-api-access-hq45c\") pod \"manila-operator-controller-manager-7c79b5df47-jkzbj\" (UID: \"a6ef169a-4706-4704-bc8a-4afe5a1d4ac9\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.448992 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drsp8\" (UniqueName: \"kubernetes.io/projected/9127ec85-11f3-4526-bda2-884648292518-kube-api-access-drsp8\") pod \"keystone-operator-controller-manager-7765d96ddf-r8t7n\" (UID: 
\"9127ec85-11f3-4526-bda2-884648292518\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.449040 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsffh\" (UniqueName: \"kubernetes.io/projected/2648e9ae-b5db-4196-a921-5a708baae84d-kube-api-access-zsffh\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.449068 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msnlw\" (UniqueName: \"kubernetes.io/projected/a79b1912-6054-4cc9-a584-c7e3e6ca9a31-kube-api-access-msnlw\") pod \"ironic-operator-controller-manager-6c548fd776-77vc7\" (UID: \"a79b1912-6054-4cc9-a584-c7e3e6ca9a31\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.449089 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqqxn\" (UniqueName: \"kubernetes.io/projected/98c32660-966d-43a1-932d-4ca2af418bf5-kube-api-access-bqqxn\") pod \"mariadb-operator-controller-manager-56bbcc9d85-cwkch\" (UID: \"98c32660-966d-43a1-932d-4ca2af418bf5\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch" Dec 02 13:59:59 crc kubenswrapper[4900]: E1202 13:59:59.449764 4900 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 13:59:59 crc kubenswrapper[4900]: E1202 13:59:59.449835 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert podName:2648e9ae-b5db-4196-a921-5a708baae84d nodeName:}" failed. No retries permitted until 2025-12-02 13:59:59.949815893 +0000 UTC m=+1045.365629744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert") pod "infra-operator-controller-manager-57548d458d-6479q" (UID: "2648e9ae-b5db-4196-a921-5a708baae84d") : secret "infra-operator-webhook-server-cert" not found Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.468218 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.488840 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.490757 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.491973 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twrsk\" (UniqueName: \"kubernetes.io/projected/c6afcb92-eb6f-4615-8b25-bcdc77eda80e-kube-api-access-twrsk\") pod \"horizon-operator-controller-manager-68c6d99b8f-htb7b\" (UID: \"c6afcb92-eb6f-4615-8b25-bcdc77eda80e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.509212 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.510438 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsffh\" (UniqueName: \"kubernetes.io/projected/2648e9ae-b5db-4196-a921-5a708baae84d-kube-api-access-zsffh\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.511180 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msnlw\" (UniqueName: \"kubernetes.io/projected/a79b1912-6054-4cc9-a584-c7e3e6ca9a31-kube-api-access-msnlw\") pod \"ironic-operator-controller-manager-6c548fd776-77vc7\" (UID: \"a79b1912-6054-4cc9-a584-c7e3e6ca9a31\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.512176 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.513920 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf6wf\" (UniqueName: \"kubernetes.io/projected/76472776-56db-440f-a0a5-5a45eaa83baa-kube-api-access-pf6wf\") pod \"heat-operator-controller-manager-5f64f6f8bb-l8dqg\" (UID: \"76472776-56db-440f-a0a5-5a45eaa83baa\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.529109 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.529569 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sfdn9" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.530319 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.531008 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.538536 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-nk87h" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.547038 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.550008 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.550500 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqqxn\" (UniqueName: \"kubernetes.io/projected/98c32660-966d-43a1-932d-4ca2af418bf5-kube-api-access-bqqxn\") pod \"mariadb-operator-controller-manager-56bbcc9d85-cwkch\" (UID: \"98c32660-966d-43a1-932d-4ca2af418bf5\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.550537 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v5j8\" (UniqueName: \"kubernetes.io/projected/b45635de-34ee-4361-b519-a12f95d3849b-kube-api-access-8v5j8\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ksgnh\" (UID: \"b45635de-34ee-4361-b519-a12f95d3849b\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.551011 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq45c\" (UniqueName: \"kubernetes.io/projected/a6ef169a-4706-4704-bc8a-4afe5a1d4ac9-kube-api-access-hq45c\") pod \"manila-operator-controller-manager-7c79b5df47-jkzbj\" (UID: \"a6ef169a-4706-4704-bc8a-4afe5a1d4ac9\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.551040 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drsp8\" (UniqueName: \"kubernetes.io/projected/9127ec85-11f3-4526-bda2-884648292518-kube-api-access-drsp8\") pod \"keystone-operator-controller-manager-7765d96ddf-r8t7n\" (UID: \"9127ec85-11f3-4526-bda2-884648292518\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.568109 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqqxn\" (UniqueName: \"kubernetes.io/projected/98c32660-966d-43a1-932d-4ca2af418bf5-kube-api-access-bqqxn\") pod \"mariadb-operator-controller-manager-56bbcc9d85-cwkch\" (UID: \"98c32660-966d-43a1-932d-4ca2af418bf5\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.568591 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drsp8\" (UniqueName: \"kubernetes.io/projected/9127ec85-11f3-4526-bda2-884648292518-kube-api-access-drsp8\") pod \"keystone-operator-controller-manager-7765d96ddf-r8t7n\" (UID: \"9127ec85-11f3-4526-bda2-884648292518\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.568639 
4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.573299 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq45c\" (UniqueName: \"kubernetes.io/projected/a6ef169a-4706-4704-bc8a-4afe5a1d4ac9-kube-api-access-hq45c\") pod \"manila-operator-controller-manager-7c79b5df47-jkzbj\" (UID: \"a6ef169a-4706-4704-bc8a-4afe5a1d4ac9\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.581320 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.592220 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.593913 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.598331 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-89gfp" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.599405 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.617751 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.618884 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.621525 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-p88ck" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.631451 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.631977 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.633048 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.637038 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6pnlp" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.650506 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.651991 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v5j8\" (UniqueName: \"kubernetes.io/projected/b45635de-34ee-4361-b519-a12f95d3849b-kube-api-access-8v5j8\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ksgnh\" (UID: \"b45635de-34ee-4361-b519-a12f95d3849b\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.652046 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjwc\" (UniqueName: \"kubernetes.io/projected/6a464001-7dd2-4485-ba44-3c1dcd166c05-kube-api-access-4fjwc\") pod \"nova-operator-controller-manager-697bc559fc-7h9xf\" (UID: \"6a464001-7dd2-4485-ba44-3c1dcd166c05\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.652096 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnd42\" (UniqueName: \"kubernetes.io/projected/f67eee76-7e8d-4b82-aa0a-b5a8600de493-kube-api-access-nnd42\") pod \"octavia-operator-controller-manager-998648c74-vk8cv\" (UID: \"f67eee76-7e8d-4b82-aa0a-b5a8600de493\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.652119 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.652162 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpd68\" (UniqueName: \"kubernetes.io/projected/58fb2457-4246-4898-98d3-c33292975d8e-kube-api-access-hpd68\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.652478 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.654990 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-6jmmh" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.671094 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.678122 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v5j8\" (UniqueName: \"kubernetes.io/projected/b45635de-34ee-4361-b519-a12f95d3849b-kube-api-access-8v5j8\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ksgnh\" (UID: \"b45635de-34ee-4361-b519-a12f95d3849b\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.684525 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.695277 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.702629 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.721472 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.722763 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.732407 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zd4q5" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.732807 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.753410 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjwc\" (UniqueName: \"kubernetes.io/projected/6a464001-7dd2-4485-ba44-3c1dcd166c05-kube-api-access-4fjwc\") pod \"nova-operator-controller-manager-697bc559fc-7h9xf\" (UID: \"6a464001-7dd2-4485-ba44-3c1dcd166c05\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.753480 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcbrj\" (UniqueName: \"kubernetes.io/projected/e370b52c-c5be-4584-ada8-183e5d79e1f5-kube-api-access-xcbrj\") pod \"swift-operator-controller-manager-5f8c65bbfc-4klhn\" (UID: \"e370b52c-c5be-4584-ada8-183e5d79e1f5\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.753506 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnd42\" (UniqueName: \"kubernetes.io/projected/f67eee76-7e8d-4b82-aa0a-b5a8600de493-kube-api-access-nnd42\") pod \"octavia-operator-controller-manager-998648c74-vk8cv\" (UID: \"f67eee76-7e8d-4b82-aa0a-b5a8600de493\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.753531 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.753552 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2xrh\" (UniqueName: \"kubernetes.io/projected/7901b678-edf0-4df9-8896-c596d2eab813-kube-api-access-s2xrh\") pod \"ovn-operator-controller-manager-b6456fdb6-78lhr\" (UID: \"7901b678-edf0-4df9-8896-c596d2eab813\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.753583 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpd68\" (UniqueName: \"kubernetes.io/projected/58fb2457-4246-4898-98d3-c33292975d8e-kube-api-access-hpd68\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.753604 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqj2j\" (UniqueName: 
\"kubernetes.io/projected/a4bbc01f-bf31-4d5b-ae5a-197bb92d1a18-kube-api-access-bqj2j\") pod \"placement-operator-controller-manager-78f8948974-hh8s2\" (UID: \"a4bbc01f-bf31-4d5b-ae5a-197bb92d1a18\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2" Dec 02 13:59:59 crc kubenswrapper[4900]: E1202 13:59:59.753891 4900 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 13:59:59 crc kubenswrapper[4900]: E1202 13:59:59.753949 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert podName:58fb2457-4246-4898-98d3-c33292975d8e nodeName:}" failed. No retries permitted until 2025-12-02 14:00:00.253931699 +0000 UTC m=+1045.669745550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" (UID: "58fb2457-4246-4898-98d3-c33292975d8e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.770891 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.772053 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.775690 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nkxsl" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.779583 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.783490 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpd68\" (UniqueName: \"kubernetes.io/projected/58fb2457-4246-4898-98d3-c33292975d8e-kube-api-access-hpd68\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.793181 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnd42\" (UniqueName: \"kubernetes.io/projected/f67eee76-7e8d-4b82-aa0a-b5a8600de493-kube-api-access-nnd42\") pod \"octavia-operator-controller-manager-998648c74-vk8cv\" (UID: \"f67eee76-7e8d-4b82-aa0a-b5a8600de493\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.793560 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjwc\" (UniqueName: \"kubernetes.io/projected/6a464001-7dd2-4485-ba44-3c1dcd166c05-kube-api-access-4fjwc\") pod \"nova-operator-controller-manager-697bc559fc-7h9xf\" (UID: \"6a464001-7dd2-4485-ba44-3c1dcd166c05\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.817347 4900 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.818456 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.829904 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.836109 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-gg6wj" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.852684 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.853604 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.854695 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sbw5\" (UniqueName: \"kubernetes.io/projected/b2312412-3b86-40c1-9cf8-32d59d3a3a4e-kube-api-access-4sbw5\") pod \"test-operator-controller-manager-5854674fcc-h6jdq\" (UID: \"b2312412-3b86-40c1-9cf8-32d59d3a3a4e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.854745 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcbrj\" (UniqueName: \"kubernetes.io/projected/e370b52c-c5be-4584-ada8-183e5d79e1f5-kube-api-access-xcbrj\") pod \"swift-operator-controller-manager-5f8c65bbfc-4klhn\" (UID: \"e370b52c-c5be-4584-ada8-183e5d79e1f5\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.854796 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2xrh\" (UniqueName: \"kubernetes.io/projected/7901b678-edf0-4df9-8896-c596d2eab813-kube-api-access-s2xrh\") pod \"ovn-operator-controller-manager-b6456fdb6-78lhr\" (UID: \"7901b678-edf0-4df9-8896-c596d2eab813\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.854842 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st4km\" (UniqueName: \"kubernetes.io/projected/da380dce-d4c5-41ed-8273-648f6ad79d43-kube-api-access-st4km\") pod \"telemetry-operator-controller-manager-76cc84c6bb-s8q5z\" (UID: \"da380dce-d4c5-41ed-8273-648f6ad79d43\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.854861 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqj2j\" (UniqueName: \"kubernetes.io/projected/a4bbc01f-bf31-4d5b-ae5a-197bb92d1a18-kube-api-access-bqj2j\") pod \"placement-operator-controller-manager-78f8948974-hh8s2\" (UID: \"a4bbc01f-bf31-4d5b-ae5a-197bb92d1a18\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.855262 4900 reflector.go:368] Caches populated 
for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.855910 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4hk9p" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.855972 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.859316 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.859880 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.860223 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.871783 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.895089 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.896695 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2xrh\" (UniqueName: \"kubernetes.io/projected/7901b678-edf0-4df9-8896-c596d2eab813-kube-api-access-s2xrh\") pod \"ovn-operator-controller-manager-b6456fdb6-78lhr\" (UID: \"7901b678-edf0-4df9-8896-c596d2eab813\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.909262 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqj2j\" (UniqueName: \"kubernetes.io/projected/a4bbc01f-bf31-4d5b-ae5a-197bb92d1a18-kube-api-access-bqj2j\") pod \"placement-operator-controller-manager-78f8948974-hh8s2\" (UID: \"a4bbc01f-bf31-4d5b-ae5a-197bb92d1a18\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.910814 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.911669 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcbrj\" (UniqueName: \"kubernetes.io/projected/e370b52c-c5be-4584-ada8-183e5d79e1f5-kube-api-access-xcbrj\") pod \"swift-operator-controller-manager-5f8c65bbfc-4klhn\" (UID: \"e370b52c-c5be-4584-ada8-183e5d79e1f5\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.929008 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.930130 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.932032 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd"] Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.932787 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-ft8tc" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.952657 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.957196 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.957294 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sbw5\" (UniqueName: \"kubernetes.io/projected/b2312412-3b86-40c1-9cf8-32d59d3a3a4e-kube-api-access-4sbw5\") pod \"test-operator-controller-manager-5854674fcc-h6jdq\" (UID: \"b2312412-3b86-40c1-9cf8-32d59d3a3a4e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.957413 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.957448 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st4km\" (UniqueName: \"kubernetes.io/projected/da380dce-d4c5-41ed-8273-648f6ad79d43-kube-api-access-st4km\") pod \"telemetry-operator-controller-manager-76cc84c6bb-s8q5z\" (UID: \"da380dce-d4c5-41ed-8273-648f6ad79d43\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.957482 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthcn\" (UniqueName: \"kubernetes.io/projected/eab7da61-f654-4f78-8dfa-4ede5002df86-kube-api-access-sthcn\") pod \"watcher-operator-controller-manager-769dc69bc-2pspv\" (UID: \"eab7da61-f654-4f78-8dfa-4ede5002df86\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.957529 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.957602 4900 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqsq7\" (UniqueName: \"kubernetes.io/projected/bb6f8bf1-8305-460b-94d3-208e68ad6f52-kube-api-access-tqsq7\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 13:59:59 crc kubenswrapper[4900]: E1202 13:59:59.959049 4900 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 13:59:59 crc kubenswrapper[4900]: E1202 13:59:59.963723 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert podName:2648e9ae-b5db-4196-a921-5a708baae84d nodeName:}" failed. No retries permitted until 2025-12-02 14:00:00.959082031 +0000 UTC m=+1046.374895882 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert") pod "infra-operator-controller-manager-57548d458d-6479q" (UID: "2648e9ae-b5db-4196-a921-5a708baae84d") : secret "infra-operator-webhook-server-cert" not found Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.975120 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.976952 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st4km\" (UniqueName: \"kubernetes.io/projected/da380dce-d4c5-41ed-8273-648f6ad79d43-kube-api-access-st4km\") pod \"telemetry-operator-controller-manager-76cc84c6bb-s8q5z\" (UID: \"da380dce-d4c5-41ed-8273-648f6ad79d43\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.978518 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sbw5\" (UniqueName: \"kubernetes.io/projected/b2312412-3b86-40c1-9cf8-32d59d3a3a4e-kube-api-access-4sbw5\") pod \"test-operator-controller-manager-5854674fcc-h6jdq\" (UID: \"b2312412-3b86-40c1-9cf8-32d59d3a3a4e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.983657 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq" Dec 02 13:59:59 crc kubenswrapper[4900]: I1202 13:59:59.990255 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.052326 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.065911 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sthcn\" (UniqueName: \"kubernetes.io/projected/eab7da61-f654-4f78-8dfa-4ede5002df86-kube-api-access-sthcn\") pod \"watcher-operator-controller-manager-769dc69bc-2pspv\" (UID: \"eab7da61-f654-4f78-8dfa-4ede5002df86\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.065990 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.066027 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nnvl\" (UniqueName: \"kubernetes.io/projected/ceece6b3-6e91-4afc-9f75-604473b84a44-kube-api-access-8nnvl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mdwwd\" (UID: \"ceece6b3-6e91-4afc-9f75-604473b84a44\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.066056 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqsq7\" (UniqueName: \"kubernetes.io/projected/bb6f8bf1-8305-460b-94d3-208e68ad6f52-kube-api-access-tqsq7\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.066432 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.066596 4900 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.066692 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:00.566670016 +0000 UTC m=+1045.982484097 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "metrics-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.066723 4900 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.074748 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:00.574712021 +0000 UTC m=+1045.990525872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "webhook-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.087325 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sthcn\" (UniqueName: \"kubernetes.io/projected/eab7da61-f654-4f78-8dfa-4ede5002df86-kube-api-access-sthcn\") pod \"watcher-operator-controller-manager-769dc69bc-2pspv\" (UID: \"eab7da61-f654-4f78-8dfa-4ede5002df86\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.088998 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqsq7\" (UniqueName: \"kubernetes.io/projected/bb6f8bf1-8305-460b-94d3-208e68ad6f52-kube-api-access-tqsq7\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.158528 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j"] Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.160004 4900 util.go:30] "No sandbox for pod can be found. 
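
The "secret ... not found" failures above are kubelet's volume manager waiting on Secrets that the webhook/metrics certificate provisioning has not created yet; note durationBeforeRetry doubling per volume (500ms here, then 1s and 2s in the retries further down) until the mounts eventually succeed. A sketch of the same wait pattern with client-go and an exponential backoff; the namespace and Secret name are taken from the log, and the backoff sizing mirrors the observed delays rather than kubelet's exact internal constants:

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // 500ms doubling per step, matching the durationBeforeRetry values above.
        backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2, Steps: 8}
        err = wait.ExponentialBackoff(backoff, func() (bool, error) {
            _, err := cs.CoreV1().Secrets("openstack-operators").Get(context.TODO(),
                "infra-operator-webhook-server-cert", metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return false, nil // not created yet; retry after a doubled delay
            }
            return err == nil, err
        })
        fmt.Println("secret present:", err == nil)
    }
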
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.163897 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.164114 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.167249 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nnvl\" (UniqueName: \"kubernetes.io/projected/ceece6b3-6e91-4afc-9f75-604473b84a44-kube-api-access-8nnvl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mdwwd\" (UID: \"ceece6b3-6e91-4afc-9f75-604473b84a44\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.168500 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j"] Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.189633 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nnvl\" (UniqueName: \"kubernetes.io/projected/ceece6b3-6e91-4afc-9f75-604473b84a44-kube-api-access-8nnvl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mdwwd\" (UID: \"ceece6b3-6e91-4afc-9f75-604473b84a44\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.271145 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-secret-volume\") pod \"collect-profiles-29411400-ghj2j\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.271276 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-config-volume\") pod \"collect-profiles-29411400-ghj2j\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.271596 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.271786 4900 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.271867 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert podName:58fb2457-4246-4898-98d3-c33292975d8e nodeName:}" failed. No retries permitted until 2025-12-02 14:00:01.271842748 +0000 UTC m=+1046.687656629 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" (UID: "58fb2457-4246-4898-98d3-c33292975d8e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.272448 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2w2s\" (UniqueName: \"kubernetes.io/projected/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-kube-api-access-p2w2s\") pod \"collect-profiles-29411400-ghj2j\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.297263 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56"] Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.309215 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.343734 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.359916 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc"] Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.373612 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-secret-volume\") pod \"collect-profiles-29411400-ghj2j\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.373693 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-config-volume\") pod \"collect-profiles-29411400-ghj2j\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.373777 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2w2s\" (UniqueName: \"kubernetes.io/projected/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-kube-api-access-p2w2s\") pod \"collect-profiles-29411400-ghj2j\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.375400 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-config-volume\") pod \"collect-profiles-29411400-ghj2j\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.377835 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-secret-volume\") pod 
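
The collect-profiles-29411400-ghj2j pod that appears at exactly 14:00:00 is OLM's collect-profiles CronJob firing on schedule: the numeric suffix of a CronJob-created Job is its scheduled time in minutes since the Unix epoch, which decodes to precisely this tick:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Job suffix from the log: scheduled time in minutes since the epoch.
        const scheduledMinutes = 29411400
        fmt.Println(time.Unix(scheduledMinutes*60, 0).UTC())
        // Output: 2025-12-02 14:00:00 +0000 UTC
    }
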
\"collect-profiles-29411400-ghj2j\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.394461 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2w2s\" (UniqueName: \"kubernetes.io/projected/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-kube-api-access-p2w2s\") pod \"collect-profiles-29411400-ghj2j\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.497169 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.576733 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.576921 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.577002 4900 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.577089 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:01.577066495 +0000 UTC m=+1046.992880346 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "metrics-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.577117 4900 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.577196 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:01.577175338 +0000 UTC m=+1046.992989189 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "webhook-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.701218 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n"] Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.710680 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn"] Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.716177 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg"] Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.720287 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7"] Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.729419 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b"] Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.736079 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c"] Dec 02 14:00:00 crc kubenswrapper[4900]: W1202 14:00:00.750053 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ad04c0c_8b90_4f63_8ff3_8afe8f1d2495.slice/crio-1c032c376b39026382810518e938103c4e4d0f20ec6e95fec3bd7ba7806bc244 WatchSource:0}: Error finding container 1c032c376b39026382810518e938103c4e4d0f20ec6e95fec3bd7ba7806bc244: Status 404 returned error can't find the container with id 1c032c376b39026382810518e938103c4e4d0f20ec6e95fec3bd7ba7806bc244 Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.771535 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch"] Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.775921 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2"] Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.779722 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh"] Dec 02 14:00:00 crc kubenswrapper[4900]: W1202 14:00:00.787213 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45635de_34ee_4361_b519_a12f95d3849b.slice/crio-b4553dd62074a190cfec8c3b0b4c833155625846b3f91986cda5ae7ad0486df5 WatchSource:0}: Error finding container b4553dd62074a190cfec8c3b0b4c833155625846b3f91986cda5ae7ad0486df5: Status 404 returned error can't find the container with id b4553dd62074a190cfec8c3b0b4c833155625846b3f91986cda5ae7ad0486df5 Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.983803 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.983993 4900 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: E1202 14:00:00.984068 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert podName:2648e9ae-b5db-4196-a921-5a708baae84d nodeName:}" failed. No retries permitted until 2025-12-02 14:00:02.984050225 +0000 UTC m=+1048.399864076 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert") pod "infra-operator-controller-manager-57548d458d-6479q" (UID: "2648e9ae-b5db-4196-a921-5a708baae84d") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:00:00 crc kubenswrapper[4900]: I1202 14:00:00.985837 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z"] Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.043716 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq"] Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.048581 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf"] Dec 02 14:00:01 crc kubenswrapper[4900]: W1202 14:00:01.049997 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a464001_7dd2_4485_ba44_3c1dcd166c05.slice/crio-ab47859e7e78a31140b1675fcee4f9597a3c70f67dcc1d49059ebb274a1ae228 WatchSource:0}: Error finding container ab47859e7e78a31140b1675fcee4f9597a3c70f67dcc1d49059ebb274a1ae228: Status 404 returned error can't find the container with id ab47859e7e78a31140b1675fcee4f9597a3c70f67dcc1d49059ebb274a1ae228 Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.053692 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fjwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-7h9xf_openstack-operators(6a464001-7dd2-4485-ba44-3c1dcd166c05): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: W1202 14:00:01.054250 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6ef169a_4706_4704_bc8a_4afe5a1d4ac9.slice/crio-15849a62db52e9679c41d229f5292c1011678f2a3ad9d9769494e054e59265a0 WatchSource:0}: Error finding container 15849a62db52e9679c41d229f5292c1011678f2a3ad9d9769494e054e59265a0: Status 404 returned error can't find the container with id 15849a62db52e9679c41d229f5292c1011678f2a3ad9d9769494e054e59265a0 Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.058391 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnd42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-vk8cv_openstack-operators(f67eee76-7e8d-4b82-aa0a-b5a8600de493): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.058483 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fjwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-7h9xf_openstack-operators(6a464001-7dd2-4485-ba44-3c1dcd166c05): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.058677 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 
-3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hq45c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-jkzbj_openstack-operators(a6ef169a-4706-4704-bc8a-4afe5a1d4ac9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.060636 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" podUID="6a464001-7dd2-4485-ba44-3c1dcd166c05" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.060709 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcbrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-4klhn_openstack-operators(e370b52c-c5be-4584-ada8-183e5d79e1f5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.061756 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hq45c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-jkzbj_openstack-operators(a6ef169a-4706-4704-bc8a-4afe5a1d4ac9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.061832 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnd42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-vk8cv_openstack-operators(f67eee76-7e8d-4b82-aa0a-b5a8600de493): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.062195 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcbrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-4klhn_openstack-operators(e370b52c-c5be-4584-ada8-183e5d79e1f5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: W1202 14:00:01.062266 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7901b678_edf0_4df9_8896_c596d2eab813.slice/crio-c01d31f8a6b280e178891de3e7287d71b522a219ea25ecc784047134dd19cda4 WatchSource:0}: Error finding container c01d31f8a6b280e178891de3e7287d71b522a219ea25ecc784047134dd19cda4: Status 404 returned error can't find the container with id c01d31f8a6b280e178891de3e7287d71b522a219ea25ecc784047134dd19cda4 Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.063790 4900 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" podUID="f67eee76-7e8d-4b82-aa0a-b5a8600de493" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.063930 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" podUID="a6ef169a-4706-4704-bc8a-4afe5a1d4ac9" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.064035 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" podUID="e370b52c-c5be-4584-ada8-183e5d79e1f5" Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.066038 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj"] Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.067913 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2xrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-78lhr_openstack-operators(7901b678-edf0-4df9-8896-c596d2eab813): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.070986 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2xrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-78lhr_openstack-operators(7901b678-edf0-4df9-8896-c596d2eab813): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.072763 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" podUID="7901b678-edf0-4df9-8896-c596d2eab813" Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.075791 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv"] Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.080131 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr"] Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 
14:00:01.084296 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nnvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-mdwwd_openstack-operators(ceece6b3-6e91-4afc-9f75-604473b84a44): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.084755 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn"] Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.085446 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd" podUID="ceece6b3-6e91-4afc-9f75-604473b84a44" Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.088502 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc" event={"ID":"90f13638-1211-4cdc-9c96-298ae112e911","Type":"ContainerStarted","Data":"929b26772b4a4b3824a15094883e3be2ef7d076d1b17d9a990384e20a41e7ce8"} Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.090048 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq" event={"ID":"b2312412-3b86-40c1-9cf8-32d59d3a3a4e","Type":"ContainerStarted","Data":"8afa1cdfa070d8f5b26dafa0253b3ed7a8ed3ce4a5a64ee466606dea2d00bc1d"} Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.091548 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" 
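
Every "ErrImagePull: pull QPS exceeded" in this burst is kubelet's client-side image-pull throttle, not a registry error: pulls pass through a token-bucket rate limiter sized by the registryPullQPS and registryBurst fields of the kubelet configuration (commonly 5 QPS with a burst of 10 by default; check this node's KubeletConfiguration), and a dozen-plus operator pods pulling at once drain the bucket immediately. A sketch of the mechanism using the same rate-limiter type, with the default sizing assumed:

    package main

    import (
        "fmt"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Assumed defaults: registryPullQPS=5, registryBurst=10.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5.0, 10)

        // Sixteen simultaneous pull requests: the first ten pass on the burst
        // allowance, the rest are rejected, exactly the pattern in the log.
        for i := 1; i <= 16; i++ {
            if limiter.TryAccept() {
                fmt.Printf("pull %2d: accepted\n", i)
            } else {
                fmt.Printf("pull %2d: pull QPS exceeded\n", i)
            }
        }
    }

The rejections are transient: the runtime retries each pull under back-off, which is why the same pods reappear below with ImagePullBackOff instead of staying failed.
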
event={"ID":"a6ef169a-4706-4704-bc8a-4afe5a1d4ac9","Type":"ContainerStarted","Data":"15849a62db52e9679c41d229f5292c1011678f2a3ad9d9769494e054e59265a0"} Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.093398 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd"] Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.093572 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" podUID="a6ef169a-4706-4704-bc8a-4afe5a1d4ac9" Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.093905 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn" event={"ID":"6ad04c0c-8b90-4f63-8ff3-8afe8f1d2495","Type":"ContainerStarted","Data":"1c032c376b39026382810518e938103c4e4d0f20ec6e95fec3bd7ba7806bc244"} Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.103271 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b" event={"ID":"c6afcb92-eb6f-4615-8b25-bcdc77eda80e","Type":"ContainerStarted","Data":"7a685c8154615e202806a0f4fb7677ea2d406a46b4ab09888d327713de35252b"} Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.103849 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv"] Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.106910 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch" event={"ID":"98c32660-966d-43a1-932d-4ca2af418bf5","Type":"ContainerStarted","Data":"1870338dc1e85d3687bdd0429c90acba3640455394ea8cc496c30ac52c88fc28"} Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.109268 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j"] Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.110539 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7" event={"ID":"a79b1912-6054-4cc9-a584-c7e3e6ca9a31","Type":"ContainerStarted","Data":"5141f2e199825a085e299dc9c0877a887a81c1ec339f1d6f8ab7207d61e3f4e0"} Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.113092 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z" event={"ID":"da380dce-d4c5-41ed-8273-648f6ad79d43","Type":"ContainerStarted","Data":"9bd82641c651cb88e4d54bac13174fabfb652aa4cfff2b49784f94249bb59800"} Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.114139 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" event={"ID":"7901b678-edf0-4df9-8896-c596d2eab813","Type":"ContainerStarted","Data":"c01d31f8a6b280e178891de3e7287d71b522a219ea25ecc784047134dd19cda4"} Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.116793 4900 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" podUID="7901b678-edf0-4df9-8896-c596d2eab813" Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.116824 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c" event={"ID":"60b341d0-be93-4332-8eb3-356d0a0b4ee4","Type":"ContainerStarted","Data":"a91ca4242b1d6cfb34d2b355b41c96e91509d5b1aea3e881d73138715c00d07d"} Dec 02 14:00:01 crc kubenswrapper[4900]: W1202 14:00:01.117460 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeab7da61_f654_4f78_8dfa_4ede5002df86.slice/crio-9fb1f0ff3123288757767b106c18e1ccdfb8db33eec8dfb72fefa79dde7139d6 WatchSource:0}: Error finding container 9fb1f0ff3123288757767b106c18e1ccdfb8db33eec8dfb72fefa79dde7139d6: Status 404 returned error can't find the container with id 9fb1f0ff3123288757767b106c18e1ccdfb8db33eec8dfb72fefa79dde7139d6 Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.118131 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n" event={"ID":"9127ec85-11f3-4526-bda2-884648292518","Type":"ContainerStarted","Data":"27cafaf8d102ba885fa3b9f482ef765c15b33a3b610ceb9a8b21a6d40ed9abc6"} Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.119498 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sthcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-2pspv_openstack-operators(eab7da61-f654-4f78-8dfa-4ede5002df86): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.119610 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg" event={"ID":"76472776-56db-440f-a0a5-5a45eaa83baa","Type":"ContainerStarted","Data":"9c104273fa1a44dae0712b82a4783edc7dfbf00853bbee397c98ee7b99db3f38"} Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.120758 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56" event={"ID":"6c51856b-78db-4067-aec4-bdbb2513d6d3","Type":"ContainerStarted","Data":"a96348de63a780fbc2c306f4c23009ee1fdbfaf9035f7c01662bbc4834133752"} Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.121195 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sthcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-2pspv_openstack-operators(eab7da61-f654-4f78-8dfa-4ede5002df86): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.121879 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh" event={"ID":"b45635de-34ee-4361-b519-a12f95d3849b","Type":"ContainerStarted","Data":"b4553dd62074a190cfec8c3b0b4c833155625846b3f91986cda5ae7ad0486df5"} Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.122245 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" podUID="eab7da61-f654-4f78-8dfa-4ede5002df86" Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.124510 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" event={"ID":"e370b52c-c5be-4584-ada8-183e5d79e1f5","Type":"ContainerStarted","Data":"69afc970d7f8693d2b2cdc8cb39cdae36bf55b477891e6620ceef1e7c0065272"} Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.129933 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2" event={"ID":"a4bbc01f-bf31-4d5b-ae5a-197bb92d1a18","Type":"ContainerStarted","Data":"9b22e9c62e96ab995a8565210ad93b1839b0078974fadb6936586c51dad6b50d"} Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.132386 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" podUID="e370b52c-c5be-4584-ada8-183e5d79e1f5" Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.132857 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" event={"ID":"6a464001-7dd2-4485-ba44-3c1dcd166c05","Type":"ContainerStarted","Data":"ab47859e7e78a31140b1675fcee4f9597a3c70f67dcc1d49059ebb274a1ae228"} Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.135739 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" event={"ID":"f67eee76-7e8d-4b82-aa0a-b5a8600de493","Type":"ContainerStarted","Data":"d27de32cf00fefc970739d201312d976fa1f4235b5eaea45dda933582f9ec76e"} Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.135962 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" podUID="6a464001-7dd2-4485-ba44-3c1dcd166c05" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.137003 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" podUID="f67eee76-7e8d-4b82-aa0a-b5a8600de493" Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.287780 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.288021 4900 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.288107 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert podName:58fb2457-4246-4898-98d3-c33292975d8e nodeName:}" failed. No retries permitted until 2025-12-02 14:00:03.288086629 +0000 UTC m=+1048.703900500 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" (UID: "58fb2457-4246-4898-98d3-c33292975d8e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.593505 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.593852 4900 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.594489 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:03.594451288 +0000 UTC m=+1049.010265189 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "webhook-server-cert" not found Dec 02 14:00:01 crc kubenswrapper[4900]: I1202 14:00:01.594538 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.594797 4900 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:00:01 crc kubenswrapper[4900]: E1202 14:00:01.594905 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:03.59488076 +0000 UTC m=+1049.010694711 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "metrics-server-cert" not found Dec 02 14:00:02 crc kubenswrapper[4900]: I1202 14:00:02.142592 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd" event={"ID":"ceece6b3-6e91-4afc-9f75-604473b84a44","Type":"ContainerStarted","Data":"e5bcfc7f4dfe1784802b19334b05a0ee8b55ae660d74caf0590ee47d567f43ee"} Dec 02 14:00:02 crc kubenswrapper[4900]: E1202 14:00:02.144306 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd" podUID="ceece6b3-6e91-4afc-9f75-604473b84a44" Dec 02 14:00:02 crc kubenswrapper[4900]: I1202 14:00:02.144847 4900 generic.go:334] "Generic (PLEG): container finished" podID="7b0e9f79-e00b-4f50-9dac-35ba58716c2a" containerID="de128c0018ee69f7637c11ccbe47246a83da49088a4c5ba17152d641b243a8a6" exitCode=0 Dec 02 14:00:02 crc kubenswrapper[4900]: I1202 14:00:02.145033 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" event={"ID":"7b0e9f79-e00b-4f50-9dac-35ba58716c2a","Type":"ContainerDied","Data":"de128c0018ee69f7637c11ccbe47246a83da49088a4c5ba17152d641b243a8a6"} Dec 02 14:00:02 crc kubenswrapper[4900]: I1202 14:00:02.145106 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" event={"ID":"7b0e9f79-e00b-4f50-9dac-35ba58716c2a","Type":"ContainerStarted","Data":"fef59e9270d3f5091d3749bc24d1cfef4c01cb928e3d1a7f5a35025599f81eb3"} Dec 02 14:00:02 crc kubenswrapper[4900]: I1202 14:00:02.147539 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" event={"ID":"eab7da61-f654-4f78-8dfa-4ede5002df86","Type":"ContainerStarted","Data":"9fb1f0ff3123288757767b106c18e1ccdfb8db33eec8dfb72fefa79dde7139d6"} Dec 02 14:00:02 crc kubenswrapper[4900]: E1202 14:00:02.148967 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" podUID="7901b678-edf0-4df9-8896-c596d2eab813" Dec 02 14:00:02 crc kubenswrapper[4900]: E1202 14:00:02.150102 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" podUID="6a464001-7dd2-4485-ba44-3c1dcd166c05" Dec 02 14:00:02 crc kubenswrapper[4900]: E1202 14:00:02.150247 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" podUID="a6ef169a-4706-4704-bc8a-4afe5a1d4ac9" Dec 02 14:00:02 crc kubenswrapper[4900]: E1202 14:00:02.152220 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" podUID="e370b52c-c5be-4584-ada8-183e5d79e1f5" Dec 02 14:00:02 crc kubenswrapper[4900]: E1202 14:00:02.152379 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" podUID="f67eee76-7e8d-4b82-aa0a-b5a8600de493" Dec 02 14:00:02 crc kubenswrapper[4900]: E1202 14:00:02.152481 4900 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" podUID="eab7da61-f654-4f78-8dfa-4ede5002df86" Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.024001 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 14:00:03 crc kubenswrapper[4900]: E1202 14:00:03.024218 4900 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:00:03 crc kubenswrapper[4900]: E1202 14:00:03.024737 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert podName:2648e9ae-b5db-4196-a921-5a708baae84d nodeName:}" failed. No retries permitted until 2025-12-02 14:00:07.024716304 +0000 UTC m=+1052.440530155 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert") pod "infra-operator-controller-manager-57548d458d-6479q" (UID: "2648e9ae-b5db-4196-a921-5a708baae84d") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:00:03 crc kubenswrapper[4900]: E1202 14:00:03.155348 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd" podUID="ceece6b3-6e91-4afc-9f75-604473b84a44" Dec 02 14:00:03 crc kubenswrapper[4900]: E1202 14:00:03.155500 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" podUID="eab7da61-f654-4f78-8dfa-4ede5002df86" Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.330846 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 14:00:03 crc kubenswrapper[4900]: E1202 14:00:03.331141 4900 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: 
secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:00:03 crc kubenswrapper[4900]: E1202 14:00:03.331286 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert podName:58fb2457-4246-4898-98d3-c33292975d8e nodeName:}" failed. No retries permitted until 2025-12-02 14:00:07.331261208 +0000 UTC m=+1052.747075079 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" (UID: "58fb2457-4246-4898-98d3-c33292975d8e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.598322 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.636379 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.636467 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:03 crc kubenswrapper[4900]: E1202 14:00:03.636595 4900 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:00:03 crc kubenswrapper[4900]: E1202 14:00:03.636713 4900 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:00:03 crc kubenswrapper[4900]: E1202 14:00:03.636793 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:07.636750373 +0000 UTC m=+1053.052564224 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "webhook-server-cert" not found Dec 02 14:00:03 crc kubenswrapper[4900]: E1202 14:00:03.636813 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:07.636805094 +0000 UTC m=+1053.052618945 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "metrics-server-cert" not found Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.737457 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-config-volume\") pod \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.737554 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2w2s\" (UniqueName: \"kubernetes.io/projected/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-kube-api-access-p2w2s\") pod \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.737598 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-secret-volume\") pod \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\" (UID: \"7b0e9f79-e00b-4f50-9dac-35ba58716c2a\") " Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.738349 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-config-volume" (OuterVolumeSpecName: "config-volume") pod "7b0e9f79-e00b-4f50-9dac-35ba58716c2a" (UID: "7b0e9f79-e00b-4f50-9dac-35ba58716c2a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.748782 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-kube-api-access-p2w2s" (OuterVolumeSpecName: "kube-api-access-p2w2s") pod "7b0e9f79-e00b-4f50-9dac-35ba58716c2a" (UID: "7b0e9f79-e00b-4f50-9dac-35ba58716c2a"). InnerVolumeSpecName "kube-api-access-p2w2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.756871 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7b0e9f79-e00b-4f50-9dac-35ba58716c2a" (UID: "7b0e9f79-e00b-4f50-9dac-35ba58716c2a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.845348 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.845396 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:00:03 crc kubenswrapper[4900]: I1202 14:00:03.845410 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2w2s\" (UniqueName: \"kubernetes.io/projected/7b0e9f79-e00b-4f50-9dac-35ba58716c2a-kube-api-access-p2w2s\") on node \"crc\" DevicePath \"\"" Dec 02 14:00:04 crc kubenswrapper[4900]: I1202 14:00:04.179360 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" event={"ID":"7b0e9f79-e00b-4f50-9dac-35ba58716c2a","Type":"ContainerDied","Data":"fef59e9270d3f5091d3749bc24d1cfef4c01cb928e3d1a7f5a35025599f81eb3"} Dec 02 14:00:04 crc kubenswrapper[4900]: I1202 14:00:04.179419 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef59e9270d3f5091d3749bc24d1cfef4c01cb928e3d1a7f5a35025599f81eb3" Dec 02 14:00:04 crc kubenswrapper[4900]: I1202 14:00:04.179381 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j" Dec 02 14:00:04 crc kubenswrapper[4900]: I1202 14:00:04.180824 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc" event={"ID":"90f13638-1211-4cdc-9c96-298ae112e911","Type":"ContainerStarted","Data":"7eff552d82f2b8ac07f1de65ae46ce522c3973236401c26dae8d35db0f123610"} Dec 02 14:00:07 crc kubenswrapper[4900]: I1202 14:00:07.032126 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 14:00:07 crc kubenswrapper[4900]: E1202 14:00:07.032352 4900 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:00:07 crc kubenswrapper[4900]: E1202 14:00:07.032440 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert podName:2648e9ae-b5db-4196-a921-5a708baae84d nodeName:}" failed. No retries permitted until 2025-12-02 14:00:15.032416147 +0000 UTC m=+1060.448229998 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert") pod "infra-operator-controller-manager-57548d458d-6479q" (UID: "2648e9ae-b5db-4196-a921-5a708baae84d") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:00:07 crc kubenswrapper[4900]: I1202 14:00:07.335371 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 14:00:07 crc kubenswrapper[4900]: E1202 14:00:07.335626 4900 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:00:07 crc kubenswrapper[4900]: E1202 14:00:07.335700 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert podName:58fb2457-4246-4898-98d3-c33292975d8e nodeName:}" failed. No retries permitted until 2025-12-02 14:00:15.335683819 +0000 UTC m=+1060.751497670 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" (UID: "58fb2457-4246-4898-98d3-c33292975d8e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:00:07 crc kubenswrapper[4900]: E1202 14:00:07.639428 4900 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:00:07 crc kubenswrapper[4900]: I1202 14:00:07.640425 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:07 crc kubenswrapper[4900]: E1202 14:00:07.640490 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:15.640472294 +0000 UTC m=+1061.056286145 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "metrics-server-cert" not found Dec 02 14:00:07 crc kubenswrapper[4900]: I1202 14:00:07.640662 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:07 crc kubenswrapper[4900]: E1202 14:00:07.640774 4900 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:00:07 crc kubenswrapper[4900]: E1202 14:00:07.640808 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:15.640802063 +0000 UTC m=+1061.056615914 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "webhook-server-cert" not found Dec 02 14:00:15 crc kubenswrapper[4900]: I1202 14:00:15.070837 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 14:00:15 crc kubenswrapper[4900]: E1202 14:00:15.071029 4900 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:00:15 crc kubenswrapper[4900]: E1202 14:00:15.071388 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert podName:2648e9ae-b5db-4196-a921-5a708baae84d nodeName:}" failed. No retries permitted until 2025-12-02 14:00:31.071374789 +0000 UTC m=+1076.487188640 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert") pod "infra-operator-controller-manager-57548d458d-6479q" (UID: "2648e9ae-b5db-4196-a921-5a708baae84d") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:00:15 crc kubenswrapper[4900]: I1202 14:00:15.375158 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 14:00:15 crc kubenswrapper[4900]: E1202 14:00:15.375444 4900 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:00:15 crc kubenswrapper[4900]: E1202 14:00:15.375541 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert podName:58fb2457-4246-4898-98d3-c33292975d8e nodeName:}" failed. No retries permitted until 2025-12-02 14:00:31.375517206 +0000 UTC m=+1076.791331057 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" (UID: "58fb2457-4246-4898-98d3-c33292975d8e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:00:15 crc kubenswrapper[4900]: I1202 14:00:15.679749 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:15 crc kubenswrapper[4900]: I1202 14:00:15.679816 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:15 crc kubenswrapper[4900]: E1202 14:00:15.679963 4900 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:00:15 crc kubenswrapper[4900]: E1202 14:00:15.679964 4900 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:00:15 crc kubenswrapper[4900]: E1202 14:00:15.680018 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:31.680003882 +0000 UTC m=+1077.095817733 (durationBeforeRetry 16s). 
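The retry timing on the failing secret mounts is worth reading off the durationBeforeRetry values: 2s, then 4s, 8s, and now 16s, a per-operation doubling backoff (the 2s starting point presumably reflects failures just before this excerpt; the base and cap below are inferred from the log rather than taken from kubelet source). A sketch reproducing the schedule seen for the openstack-baremetal-operator cert volume:

```python
from datetime import datetime, timedelta

def retry_schedule(first_failure, base=timedelta(seconds=2),
                   cap=timedelta(minutes=2), n=5):
    """Doubling backoff matching the observed durationBeforeRetry
    progression (2s, 4s, 8s, 16s); base and cap are assumptions."""
    t, delay = first_failure, base
    out = []
    for _ in range(n):
        t = t + delay
        out.append((t.time(), delay))
        delay = min(delay * 2, cap)
    return out

# The baremetal-operator cert mount first fails at 14:00:01:
for when, delay in retry_schedule(datetime(2025, 12, 2, 14, 0, 1)):
    print(f"retry no earlier than {when} (after {delay})")
```

The printed schedule (14:00:03, 14:00:07, 14:00:15, 14:00:31) matches the "No retries permitted until" timestamps above; the predicted 32s window at 14:01:03 falls outside this excerpt.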
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "metrics-server-cert" not found Dec 02 14:00:15 crc kubenswrapper[4900]: E1202 14:00:15.680035 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs podName:bb6f8bf1-8305-460b-94d3-208e68ad6f52 nodeName:}" failed. No retries permitted until 2025-12-02 14:00:31.680028923 +0000 UTC m=+1077.095842774 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs") pod "openstack-operator-controller-manager-58cd586464-f64kd" (UID: "bb6f8bf1-8305-460b-94d3-208e68ad6f52") : secret "webhook-server-cert" not found Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.289298 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg" event={"ID":"76472776-56db-440f-a0a5-5a45eaa83baa","Type":"ContainerStarted","Data":"0d010ad32ce665903890fff2491ae27c47e55bf3db60dea2ebe1a8add3835708"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.290775 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2" event={"ID":"a4bbc01f-bf31-4d5b-ae5a-197bb92d1a18","Type":"ContainerStarted","Data":"353c664ac222fbf023d054be8e4e89e831b5a1de8aa3db877d37d44901cc7b19"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.294214 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc" event={"ID":"90f13638-1211-4cdc-9c96-298ae112e911","Type":"ContainerStarted","Data":"e1a4961900d25bdec9d1c9acfc1a311f36282f7b680665a7bd867ba01efc2670"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.296143 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc" Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.299241 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc" Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.302146 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn" event={"ID":"6ad04c0c-8b90-4f63-8ff3-8afe8f1d2495","Type":"ContainerStarted","Data":"85dd033759b741dd88b5490b99cfd6561ad5c8d9d92895201d01eb70524cf51c"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.303520 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh" event={"ID":"b45635de-34ee-4361-b519-a12f95d3849b","Type":"ContainerStarted","Data":"270907da8c36e2a2b52141c49b379a30c2868450357af7a41878f1434828751a"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.304791 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq" event={"ID":"b2312412-3b86-40c1-9cf8-32d59d3a3a4e","Type":"ContainerStarted","Data":"bafdaa3bbf0cd7b2f822f22a11633d8619c2e833cb30af783dafa01c608a3166"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.313756 4900 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c" event={"ID":"60b341d0-be93-4332-8eb3-356d0a0b4ee4","Type":"ContainerStarted","Data":"876e934d98bc6b511be242f82c39e745a349644307173d68945982b5448d2d4c"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.319113 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n" event={"ID":"9127ec85-11f3-4526-bda2-884648292518","Type":"ContainerStarted","Data":"21d4252ff9aa6d6df92a5161a075be8ce32fde39fa43a08d30c338d514e0c222"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.320533 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-snhnc" podStartSLOduration=2.290026692 podStartE2EDuration="18.320498692s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.381545483 +0000 UTC m=+1045.797359324" lastFinishedPulling="2025-12-02 14:00:16.412017473 +0000 UTC m=+1061.827831324" observedRunningTime="2025-12-02 14:00:17.320498962 +0000 UTC m=+1062.736312813" watchObservedRunningTime="2025-12-02 14:00:17.320498692 +0000 UTC m=+1062.736312543" Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.329765 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b" event={"ID":"c6afcb92-eb6f-4615-8b25-bcdc77eda80e","Type":"ContainerStarted","Data":"82875c82bca599bbfc259b71e85191a04ea7cdb560ff7816eff56fe3149eea01"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.335654 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z" event={"ID":"da380dce-d4c5-41ed-8273-648f6ad79d43","Type":"ContainerStarted","Data":"40dff1abccc39d5b28950c8236fd006729c8d7ccaf8e772cd84db30ff950e8d6"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.338251 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" event={"ID":"7901b678-edf0-4df9-8896-c596d2eab813","Type":"ContainerStarted","Data":"1717722bf38a40c34d34e1856e207e63e5430e8c55f92739118ebc8c15f9c838"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.343630 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7" event={"ID":"a79b1912-6054-4cc9-a584-c7e3e6ca9a31","Type":"ContainerStarted","Data":"769b1e7bdbe7ddbb88b98c415ecf62c9d9125932d539be57ef954941210e6d5e"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.351692 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch" event={"ID":"98c32660-966d-43a1-932d-4ca2af418bf5","Type":"ContainerStarted","Data":"41b6d3b88340a71199eb432aa3c848576372abbb53c17c7ac399366666b3a82b"} Dec 02 14:00:17 crc kubenswrapper[4900]: I1202 14:00:17.361320 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56" event={"ID":"6c51856b-78db-4067-aec4-bdbb2513d6d3","Type":"ContainerStarted","Data":"d07fedcd817167309d9c88b0e62f357f82125871317d7f41bf4df41883dfc084"} Dec 02 14:00:20 crc kubenswrapper[4900]: I1202 14:00:20.388243 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2" event={"ID":"a4bbc01f-bf31-4d5b-ae5a-197bb92d1a18","Type":"ContainerStarted","Data":"5c8eee3d80b9d98d88eb7a70090cadb12a1b4490fc7abb28951aaa8fd51d618c"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.418974 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq" event={"ID":"b2312412-3b86-40c1-9cf8-32d59d3a3a4e","Type":"ContainerStarted","Data":"8ce3c795098702c805d9c7e806f2decb8131b8e8a2f7f7fc9fdfb7e3e8f2b63d"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.419135 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.422499 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.429163 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch" event={"ID":"98c32660-966d-43a1-932d-4ca2af418bf5","Type":"ContainerStarted","Data":"4c97cba92a49573caf1ff60520b7634363b7f8a7ead0500623f9afac2100254b"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.429383 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.434168 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.435394 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" event={"ID":"e370b52c-c5be-4584-ada8-183e5d79e1f5","Type":"ContainerStarted","Data":"1f7d7e9a488c600b3b53d000013e76bf6eb6a66f522a428bbfb7f78652d6db16"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.439371 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn" event={"ID":"6ad04c0c-8b90-4f63-8ff3-8afe8f1d2495","Type":"ContainerStarted","Data":"8c13fa4dc542219be8a4eecf8aa42333cfb67dad8385b006b1d31fc06af048b5"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.439509 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.441276 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh" event={"ID":"b45635de-34ee-4361-b519-a12f95d3849b","Type":"ContainerStarted","Data":"67d213d31b12f146790942f425a69195421f2bbf12b52b27158c6e6e604dfb56"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.441334 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.445810 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.446378 4900 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.451576 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c" event={"ID":"60b341d0-be93-4332-8eb3-356d0a0b4ee4","Type":"ContainerStarted","Data":"c9fb10d5b5f5ec6d194b8ee8cd036def31426e80e921760ec386c7c672466548"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.452330 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.454534 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.468196 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7" event={"ID":"a79b1912-6054-4cc9-a584-c7e3e6ca9a31","Type":"ContainerStarted","Data":"25d1d3fc24bba3a4730c94bea170f86bc29a37bed3ddd5c26680e04075b7ad78"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.468510 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.474894 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.497777 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-h6jdq" podStartSLOduration=9.388981057 podStartE2EDuration="22.497761053s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:01.051667314 +0000 UTC m=+1046.467481165" lastFinishedPulling="2025-12-02 14:00:14.16044731 +0000 UTC m=+1059.576261161" observedRunningTime="2025-12-02 14:00:21.461923792 +0000 UTC m=+1066.877737633" watchObservedRunningTime="2025-12-02 14:00:21.497761053 +0000 UTC m=+1066.913574904" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.502572 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n" event={"ID":"9127ec85-11f3-4526-bda2-884648292518","Type":"ContainerStarted","Data":"0557d78e8358a1f3af46f7ea3a65d558ac5ebc5453614a64ab18a39d3e94f768"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.503425 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.510863 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.522374 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ksgnh" podStartSLOduration=9.149810125 podStartE2EDuration="22.52236102s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.788944525 +0000 UTC m=+1046.204758376" 
lastFinishedPulling="2025-12-02 14:00:14.16149542 +0000 UTC m=+1059.577309271" observedRunningTime="2025-12-02 14:00:21.498809642 +0000 UTC m=+1066.914623493" watchObservedRunningTime="2025-12-02 14:00:21.52236102 +0000 UTC m=+1066.938174871" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.558617 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-cwkch" podStartSLOduration=9.180687128 podStartE2EDuration="22.558601183s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.782946767 +0000 UTC m=+1046.198760618" lastFinishedPulling="2025-12-02 14:00:14.160860822 +0000 UTC m=+1059.576674673" observedRunningTime="2025-12-02 14:00:21.522910276 +0000 UTC m=+1066.938724127" watchObservedRunningTime="2025-12-02 14:00:21.558601183 +0000 UTC m=+1066.974415034" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.580078 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg" event={"ID":"76472776-56db-440f-a0a5-5a45eaa83baa","Type":"ContainerStarted","Data":"a0ad710aaaf0d447a81146246ccc236a8e671b0ff4b4b0d81371f2f2d7f31285"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.580383 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.599883 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b" event={"ID":"c6afcb92-eb6f-4615-8b25-bcdc77eda80e","Type":"ContainerStarted","Data":"f33198a6fff2300e773be8d8e9f5eae606a29de49eb8a9d0459edaf215f97a79"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.600394 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.600840 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.605897 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.622547 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-79n4c" podStartSLOduration=9.217875167 podStartE2EDuration="22.622526779s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.755363496 +0000 UTC m=+1046.171177347" lastFinishedPulling="2025-12-02 14:00:14.160015068 +0000 UTC m=+1059.575828959" observedRunningTime="2025-12-02 14:00:21.600797612 +0000 UTC m=+1067.016611463" watchObservedRunningTime="2025-12-02 14:00:21.622526779 +0000 UTC m=+1067.038340650" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.630148 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z" event={"ID":"da380dce-d4c5-41ed-8273-648f6ad79d43","Type":"ContainerStarted","Data":"2e495f03a338769a8ecb93d8c27621ba447b7899e9ca4c8e5b2427989e69cf4e"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.631230 4900 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.634164 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.644251 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-z9xzn" podStartSLOduration=9.240282602 podStartE2EDuration="22.644230215s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.754962735 +0000 UTC m=+1046.170776586" lastFinishedPulling="2025-12-02 14:00:14.158910348 +0000 UTC m=+1059.574724199" observedRunningTime="2025-12-02 14:00:21.632548189 +0000 UTC m=+1067.048362050" watchObservedRunningTime="2025-12-02 14:00:21.644230215 +0000 UTC m=+1067.060044066" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.647821 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" event={"ID":"7901b678-edf0-4df9-8896-c596d2eab813","Type":"ContainerStarted","Data":"a4e0956263f3e2f68683117e0b4278c2f6a9e75c045071dddf65edd7e9ef6c8c"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.648737 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.663539 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.667901 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56" event={"ID":"6c51856b-78db-4067-aec4-bdbb2513d6d3","Type":"ContainerStarted","Data":"5d23048a83407378cfab439d8a6107eae4bb782c5a0931043ea1683875e77037"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.668568 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.671960 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" event={"ID":"6a464001-7dd2-4485-ba44-3c1dcd166c05","Type":"ContainerStarted","Data":"d370dad9e76294cce8ec3467a56ed570e673c9b0f0638b8b3269039406f62e2c"} Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.672200 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.677826 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.681889 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.694718 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-l8dqg" 
podStartSLOduration=9.283590263 podStartE2EDuration="22.694694835s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.75010256 +0000 UTC m=+1046.165916401" lastFinishedPulling="2025-12-02 14:00:14.161207072 +0000 UTC m=+1059.577020973" observedRunningTime="2025-12-02 14:00:21.663788232 +0000 UTC m=+1067.079602083" watchObservedRunningTime="2025-12-02 14:00:21.694694835 +0000 UTC m=+1067.110508686" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.695182 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-77vc7" podStartSLOduration=9.288394408 podStartE2EDuration="22.695177179s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.75475622 +0000 UTC m=+1046.170570071" lastFinishedPulling="2025-12-02 14:00:14.161538951 +0000 UTC m=+1059.577352842" observedRunningTime="2025-12-02 14:00:21.681873857 +0000 UTC m=+1067.097687718" watchObservedRunningTime="2025-12-02 14:00:21.695177179 +0000 UTC m=+1067.110991030" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.702085 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-s8q5z" podStartSLOduration=9.53513593 podStartE2EDuration="22.702070871s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.996563675 +0000 UTC m=+1046.412377516" lastFinishedPulling="2025-12-02 14:00:14.163498566 +0000 UTC m=+1059.579312457" observedRunningTime="2025-12-02 14:00:21.700721353 +0000 UTC m=+1067.116535204" watchObservedRunningTime="2025-12-02 14:00:21.702070871 +0000 UTC m=+1067.117884722" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.736861 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htb7b" podStartSLOduration=9.313419817 podStartE2EDuration="22.736843773s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.740260465 +0000 UTC m=+1046.156074316" lastFinishedPulling="2025-12-02 14:00:14.163684421 +0000 UTC m=+1059.579498272" observedRunningTime="2025-12-02 14:00:21.732500671 +0000 UTC m=+1067.148314522" watchObservedRunningTime="2025-12-02 14:00:21.736843773 +0000 UTC m=+1067.152657624" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.768757 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-r8t7n" podStartSLOduration=8.462545176 podStartE2EDuration="22.768738344s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.73866989 +0000 UTC m=+1046.154483741" lastFinishedPulling="2025-12-02 14:00:15.044863058 +0000 UTC m=+1060.460676909" observedRunningTime="2025-12-02 14:00:21.768081005 +0000 UTC m=+1067.183894856" watchObservedRunningTime="2025-12-02 14:00:21.768738344 +0000 UTC m=+1067.184552195" Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.833477 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hh8s2" podStartSLOduration=9.453318465 podStartE2EDuration="22.833464942s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.779857061 +0000 UTC m=+1046.195670902" lastFinishedPulling="2025-12-02 
Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.836219 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cdb56" podStartSLOduration=9.011544264 podStartE2EDuration="22.836211279s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:00.335388404 +0000 UTC m=+1045.751202255" lastFinishedPulling="2025-12-02 14:00:14.160055379 +0000 UTC m=+1059.575869270" observedRunningTime="2025-12-02 14:00:21.807947569 +0000 UTC m=+1067.223761430" watchObservedRunningTime="2025-12-02 14:00:21.836211279 +0000 UTC m=+1067.252025120"
Dec 02 14:00:21 crc kubenswrapper[4900]: I1202 14:00:21.900134 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-78lhr" podStartSLOduration=7.550071944 podStartE2EDuration="22.900119114s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:01.067781965 +0000 UTC m=+1046.483595826" lastFinishedPulling="2025-12-02 14:00:16.417829145 +0000 UTC m=+1061.833642996" observedRunningTime="2025-12-02 14:00:21.894838266 +0000 UTC m=+1067.310652107" watchObservedRunningTime="2025-12-02 14:00:21.900119114 +0000 UTC m=+1067.315932965"
Dec 02 14:00:23 crc kubenswrapper[4900]: I1202 14:00:23.693537 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" event={"ID":"e370b52c-c5be-4584-ada8-183e5d79e1f5","Type":"ContainerStarted","Data":"558f38add1dd052c852bbf2c102f3aa5d9a8cc17dd07e226c823339e1c43b7a3"}
Dec 02 14:00:23 crc kubenswrapper[4900]: I1202 14:00:23.728259 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn" podStartSLOduration=6.421300028 podStartE2EDuration="24.728236515s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:01.060609674 +0000 UTC m=+1046.476423525" lastFinishedPulling="2025-12-02 14:00:19.367546141 +0000 UTC m=+1064.783360012" observedRunningTime="2025-12-02 14:00:23.717368832 +0000 UTC m=+1069.133182723" watchObservedRunningTime="2025-12-02 14:00:23.728236515 +0000 UTC m=+1069.144050406"
Dec 02 14:00:24 crc kubenswrapper[4900]: I1202 14:00:24.704750 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn"
Dec 02 14:00:25 crc kubenswrapper[4900]: I1202 14:00:25.712966 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-4klhn"
Dec 02 14:00:29 crc kubenswrapper[4900]: I1202 14:00:29.736779 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd" event={"ID":"ceece6b3-6e91-4afc-9f75-604473b84a44","Type":"ContainerStarted","Data":"0ea9f28ea6bbd920c3999faf2344c1b747fd3236afd833733449ff202d1e92b3"}
Dec 02 14:00:29 crc kubenswrapper[4900]: I1202 14:00:29.738864 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" event={"ID":"a6ef169a-4706-4704-bc8a-4afe5a1d4ac9","Type":"ContainerStarted","Data":"6a2b6a7821e8508106ae387e7016a96133ab4c15077389eeb74b523df92bcbae"}
Dec 02 14:00:29 crc kubenswrapper[4900]: I1202 14:00:29.740711 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" event={"ID":"6a464001-7dd2-4485-ba44-3c1dcd166c05","Type":"ContainerStarted","Data":"3b96b72c9a3e14434cef7b751bc6c67a2eeab9a0a90928ff09433edb40995a2d"}
Dec 02 14:00:29 crc kubenswrapper[4900]: I1202 14:00:29.742033 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" event={"ID":"f67eee76-7e8d-4b82-aa0a-b5a8600de493","Type":"ContainerStarted","Data":"f7b95ad8a91e1899d361fd3fefb9f0da302bbef0b5011ddfa158f362aa707c58"}
Dec 02 14:00:29 crc kubenswrapper[4900]: I1202 14:00:29.743371 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" event={"ID":"eab7da61-f654-4f78-8dfa-4ede5002df86","Type":"ContainerStarted","Data":"0204c6f281b3ddaca7c7b667fdc117f52586775e72ca464aef0a647c43f2722e"}
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.758325 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" event={"ID":"eab7da61-f654-4f78-8dfa-4ede5002df86","Type":"ContainerStarted","Data":"ea37df9f20db0d25f549e0777d8b164e09743985c6a87ac67f998569c5e60654"}
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.760983 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv"
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.778225 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" event={"ID":"a6ef169a-4706-4704-bc8a-4afe5a1d4ac9","Type":"ContainerStarted","Data":"aa1369cea4417bc3376823d5b2dc0dbe8996737bfaebb63030befd446a8ca554"}
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.778336 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj"
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.785719 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" event={"ID":"f67eee76-7e8d-4b82-aa0a-b5a8600de493","Type":"ContainerStarted","Data":"0c81adbd8d98a2f8d73e7fd6ad6fb36b55237a992afc25789187f93a2f3bf5e5"}
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.785793 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf"
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.785891 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv"
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.787885 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf"
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.805597 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" podStartSLOduration=3.777260911 podStartE2EDuration="31.805573343s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:01.119408507 +0000 UTC m=+1046.535222358" lastFinishedPulling="2025-12-02 14:00:29.147720899 +0000 UTC m=+1074.563534790" observedRunningTime="2025-12-02 14:00:30.801492269 +0000 UTC m=+1076.217306130" watchObservedRunningTime="2025-12-02 14:00:30.805573343 +0000 UTC m=+1076.221387244"
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.827873 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" podStartSLOduration=3.756747169 podStartE2EDuration="31.827830635s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:01.05829623 +0000 UTC m=+1046.474110081" lastFinishedPulling="2025-12-02 14:00:29.129379656 +0000 UTC m=+1074.545193547" observedRunningTime="2025-12-02 14:00:30.822014163 +0000 UTC m=+1076.237828024" watchObservedRunningTime="2025-12-02 14:00:30.827830635 +0000 UTC m=+1076.243644496"
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.845931 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" podStartSLOduration=3.775473431 podStartE2EDuration="31.84591536s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:01.058552167 +0000 UTC m=+1046.474366018" lastFinishedPulling="2025-12-02 14:00:29.128994086 +0000 UTC m=+1074.544807947" observedRunningTime="2025-12-02 14:00:30.843877674 +0000 UTC m=+1076.259691535" watchObservedRunningTime="2025-12-02 14:00:30.84591536 +0000 UTC m=+1076.261729211"
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.878640 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7h9xf" podStartSLOduration=13.526785592 podStartE2EDuration="31.878613324s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:01.053420673 +0000 UTC m=+1046.469234554" lastFinishedPulling="2025-12-02 14:00:19.405248415 +0000 UTC m=+1064.821062286" observedRunningTime="2025-12-02 14:00:30.862215626 +0000 UTC m=+1076.278029487" watchObservedRunningTime="2025-12-02 14:00:30.878613324 +0000 UTC m=+1076.294427195"
Dec 02 14:00:30 crc kubenswrapper[4900]: I1202 14:00:30.899321 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdwwd" podStartSLOduration=3.854417127 podStartE2EDuration="31.899306562s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:01.084148672 +0000 UTC m=+1046.499962523" lastFinishedPulling="2025-12-02 14:00:29.129038077 +0000 UTC m=+1074.544851958" observedRunningTime="2025-12-02 14:00:30.898121229 +0000 UTC m=+1076.313935080" watchObservedRunningTime="2025-12-02 14:00:30.899306562 +0000 UTC m=+1076.315120413"
Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.086041 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q"
Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.097398 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2648e9ae-b5db-4196-a921-5a708baae84d-cert\") pod \"infra-operator-controller-manager-57548d458d-6479q\" (UID: \"2648e9ae-b5db-4196-a921-5a708baae84d\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.366530 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.390840 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.395690 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58fb2457-4246-4898-98d3-c33292975d8e-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls\" (UID: \"58fb2457-4246-4898-98d3-c33292975d8e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.421773 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.707239 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.707346 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.711670 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.713831 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6f8bf1-8305-460b-94d3-208e68ad6f52-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-f64kd\" (UID: \"bb6f8bf1-8305-460b-94d3-208e68ad6f52\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.818533 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.895606 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-6479q"] Dec 02 14:00:31 crc kubenswrapper[4900]: W1202 14:00:31.906508 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2648e9ae_b5db_4196_a921_5a708baae84d.slice/crio-3dc1112f8bcb002fff806b843dea8fe7806d2033f53a32864b986ec91bfddbf4 WatchSource:0}: Error finding container 3dc1112f8bcb002fff806b843dea8fe7806d2033f53a32864b986ec91bfddbf4: Status 404 returned error can't find the container with id 3dc1112f8bcb002fff806b843dea8fe7806d2033f53a32864b986ec91bfddbf4 Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.909104 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:00:31 crc kubenswrapper[4900]: I1202 14:00:31.929444 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls"] Dec 02 14:00:31 crc kubenswrapper[4900]: W1202 14:00:31.942817 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58fb2457_4246_4898_98d3_c33292975d8e.slice/crio-bf0529e72f968ce4d2f162220e342c031ba3f02166f0efaa017673bff5510154 WatchSource:0}: Error finding container bf0529e72f968ce4d2f162220e342c031ba3f02166f0efaa017673bff5510154: Status 404 returned error can't find the container with id bf0529e72f968ce4d2f162220e342c031ba3f02166f0efaa017673bff5510154 Dec 02 14:00:32 crc kubenswrapper[4900]: I1202 14:00:32.054296 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd"] Dec 02 14:00:32 crc kubenswrapper[4900]: I1202 14:00:32.815027 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" event={"ID":"bb6f8bf1-8305-460b-94d3-208e68ad6f52","Type":"ContainerStarted","Data":"c1704b2668c760222298854821ded194df7c8c85f69a427f1e91accdb0bb5894"} Dec 02 14:00:32 crc kubenswrapper[4900]: I1202 14:00:32.815432 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:32 crc kubenswrapper[4900]: I1202 14:00:32.815454 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" event={"ID":"bb6f8bf1-8305-460b-94d3-208e68ad6f52","Type":"ContainerStarted","Data":"b257a17a4db9203168c139467b08115ad6190c0a5ba7ed9366119a8dd7dcfac0"} Dec 02 14:00:32 crc kubenswrapper[4900]: I1202 14:00:32.822873 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" event={"ID":"2648e9ae-b5db-4196-a921-5a708baae84d","Type":"ContainerStarted","Data":"3dc1112f8bcb002fff806b843dea8fe7806d2033f53a32864b986ec91bfddbf4"} Dec 02 14:00:32 crc kubenswrapper[4900]: I1202 14:00:32.829546 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" 
event={"ID":"58fb2457-4246-4898-98d3-c33292975d8e","Type":"ContainerStarted","Data":"bf0529e72f968ce4d2f162220e342c031ba3f02166f0efaa017673bff5510154"} Dec 02 14:00:32 crc kubenswrapper[4900]: I1202 14:00:32.856977 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" podStartSLOduration=33.856948823 podStartE2EDuration="33.856948823s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:00:32.849835014 +0000 UTC m=+1078.265648905" watchObservedRunningTime="2025-12-02 14:00:32.856948823 +0000 UTC m=+1078.272762704" Dec 02 14:00:39 crc kubenswrapper[4900]: I1202 14:00:39.863488 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-jkzbj" Dec 02 14:00:39 crc kubenswrapper[4900]: I1202 14:00:39.905363 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vk8cv" Dec 02 14:00:40 crc kubenswrapper[4900]: I1202 14:00:40.312524 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-2pspv" Dec 02 14:00:41 crc kubenswrapper[4900]: I1202 14:00:41.829206 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-f64kd" Dec 02 14:00:43 crc kubenswrapper[4900]: E1202 14:00:43.322788 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1637905685/1\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" Dec 02 14:00:43 crc kubenswrapper[4900]: E1202 14:00:43.324138 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_I
MAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELAT
ED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-an
telope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMA
GE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpd68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls_openstack-operators(58fb2457-4246-4898-98d3-c33292975d8e): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1637905685/1\": happened during read: context canceled" logger="UnhandledError" Dec 02 14:00:44 crc kubenswrapper[4900]: I1202 14:00:44.974618 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" event={"ID":"58fb2457-4246-4898-98d3-c33292975d8e","Type":"ContainerStarted","Data":"129ba9e5a578faffabda318273c255af4ac43de86f7436c47774678e649cada6"} Dec 02 14:00:45 crc kubenswrapper[4900]: I1202 14:00:45.116366 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:00:45 crc kubenswrapper[4900]: I1202 14:00:45.116462 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:00:47 crc kubenswrapper[4900]: E1202 14:00:47.230600 4900 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1637905685/1\\\": happened during read: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" podUID="58fb2457-4246-4898-98d3-c33292975d8e" Dec 02 14:00:48 crc kubenswrapper[4900]: E1202 14:00:48.012181 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" podUID="58fb2457-4246-4898-98d3-c33292975d8e" Dec 02 14:00:50 crc kubenswrapper[4900]: I1202 14:00:50.028572 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" event={"ID":"2648e9ae-b5db-4196-a921-5a708baae84d","Type":"ContainerStarted","Data":"31101b6192ed15cf4ad4694ee84d041689e7b2c32e6d69df6f911c5569165955"} Dec 02 14:00:50 crc kubenswrapper[4900]: I1202 14:00:50.029090 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" event={"ID":"2648e9ae-b5db-4196-a921-5a708baae84d","Type":"ContainerStarted","Data":"ceec2be667970eee2e84f11368f2e924a55813d524d37252f8dd5413339e9a59"} Dec 02 14:00:50 crc kubenswrapper[4900]: I1202 14:00:50.029124 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 14:00:50 crc kubenswrapper[4900]: I1202 14:00:50.052775 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" podStartSLOduration=33.819794015 podStartE2EDuration="51.052750138s" podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:31.908921638 +0000 UTC m=+1077.324735489" lastFinishedPulling="2025-12-02 14:00:49.141877751 +0000 UTC m=+1094.557691612" observedRunningTime="2025-12-02 14:00:50.043561982 +0000 UTC m=+1095.459375843" watchObservedRunningTime="2025-12-02 14:00:50.052750138 +0000 UTC m=+1095.468564019" Dec 02 14:01:01 crc kubenswrapper[4900]: I1202 14:01:01.372816 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-6479q" Dec 02 14:01:05 crc kubenswrapper[4900]: I1202 14:01:05.184206 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" event={"ID":"58fb2457-4246-4898-98d3-c33292975d8e","Type":"ContainerStarted","Data":"63c5a1e0e9ced625fbef09c8c731bee28ed29cdd14f4db9cf48d707d5aefd316"} Dec 02 14:01:05 crc kubenswrapper[4900]: I1202 14:01:05.185166 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 14:01:05 crc kubenswrapper[4900]: I1202 14:01:05.215790 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" podStartSLOduration=33.372116757 podStartE2EDuration="1m6.215768098s" 
podCreationTimestamp="2025-12-02 13:59:59 +0000 UTC" firstStartedPulling="2025-12-02 14:00:31.946285041 +0000 UTC m=+1077.362098892" lastFinishedPulling="2025-12-02 14:01:04.789936372 +0000 UTC m=+1110.205750233" observedRunningTime="2025-12-02 14:01:05.214426041 +0000 UTC m=+1110.630239932" watchObservedRunningTime="2025-12-02 14:01:05.215768098 +0000 UTC m=+1110.631581959" Dec 02 14:01:11 crc kubenswrapper[4900]: I1202 14:01:11.432184 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls" Dec 02 14:01:15 crc kubenswrapper[4900]: I1202 14:01:15.116716 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:01:15 crc kubenswrapper[4900]: I1202 14:01:15.117152 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.757312 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h5vv9"] Dec 02 14:01:29 crc kubenswrapper[4900]: E1202 14:01:29.758265 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0e9f79-e00b-4f50-9dac-35ba58716c2a" containerName="collect-profiles" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.758280 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0e9f79-e00b-4f50-9dac-35ba58716c2a" containerName="collect-profiles" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.758424 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0e9f79-e00b-4f50-9dac-35ba58716c2a" containerName="collect-profiles" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.759268 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.761322 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bnb6j" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.763198 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.763462 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.763565 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.774676 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h5vv9"] Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.824923 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7v4xb"] Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.826307 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.828203 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.843836 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7v4xb"] Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.861027 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m989t\" (UniqueName: \"kubernetes.io/projected/10830925-2ade-4408-982f-499d3921a45f-kube-api-access-m989t\") pod \"dnsmasq-dns-675f4bcbfc-h5vv9\" (UID: \"10830925-2ade-4408-982f-499d3921a45f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.861100 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10830925-2ade-4408-982f-499d3921a45f-config\") pod \"dnsmasq-dns-675f4bcbfc-h5vv9\" (UID: \"10830925-2ade-4408-982f-499d3921a45f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.962315 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m989t\" (UniqueName: \"kubernetes.io/projected/10830925-2ade-4408-982f-499d3921a45f-kube-api-access-m989t\") pod \"dnsmasq-dns-675f4bcbfc-h5vv9\" (UID: \"10830925-2ade-4408-982f-499d3921a45f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.962702 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-config\") pod \"dnsmasq-dns-78dd6ddcc-7v4xb\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.962737 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10830925-2ade-4408-982f-499d3921a45f-config\") pod \"dnsmasq-dns-675f4bcbfc-h5vv9\" (UID: \"10830925-2ade-4408-982f-499d3921a45f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.962831 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7v4xb\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.962869 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fblt8\" (UniqueName: \"kubernetes.io/projected/1d965d6f-b267-4baf-aa4d-9b681c499d96-kube-api-access-fblt8\") pod \"dnsmasq-dns-78dd6ddcc-7v4xb\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.963498 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10830925-2ade-4408-982f-499d3921a45f-config\") pod \"dnsmasq-dns-675f4bcbfc-h5vv9\" (UID: \"10830925-2ade-4408-982f-499d3921a45f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" Dec 02 
14:01:29 crc kubenswrapper[4900]: I1202 14:01:29.979607 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m989t\" (UniqueName: \"kubernetes.io/projected/10830925-2ade-4408-982f-499d3921a45f-kube-api-access-m989t\") pod \"dnsmasq-dns-675f4bcbfc-h5vv9\" (UID: \"10830925-2ade-4408-982f-499d3921a45f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" Dec 02 14:01:30 crc kubenswrapper[4900]: I1202 14:01:30.064308 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-config\") pod \"dnsmasq-dns-78dd6ddcc-7v4xb\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:01:30 crc kubenswrapper[4900]: I1202 14:01:30.064375 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7v4xb\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:01:30 crc kubenswrapper[4900]: I1202 14:01:30.064397 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fblt8\" (UniqueName: \"kubernetes.io/projected/1d965d6f-b267-4baf-aa4d-9b681c499d96-kube-api-access-fblt8\") pod \"dnsmasq-dns-78dd6ddcc-7v4xb\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:01:30 crc kubenswrapper[4900]: I1202 14:01:30.065303 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7v4xb\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:01:30 crc kubenswrapper[4900]: I1202 14:01:30.065376 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-config\") pod \"dnsmasq-dns-78dd6ddcc-7v4xb\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:01:30 crc kubenswrapper[4900]: I1202 14:01:30.081007 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fblt8\" (UniqueName: \"kubernetes.io/projected/1d965d6f-b267-4baf-aa4d-9b681c499d96-kube-api-access-fblt8\") pod \"dnsmasq-dns-78dd6ddcc-7v4xb\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:01:30 crc kubenswrapper[4900]: I1202 14:01:30.081814 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" Dec 02 14:01:30 crc kubenswrapper[4900]: I1202 14:01:30.141132 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:01:30 crc kubenswrapper[4900]: I1202 14:01:30.582219 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h5vv9"] Dec 02 14:01:30 crc kubenswrapper[4900]: I1202 14:01:30.651483 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7v4xb"] Dec 02 14:01:30 crc kubenswrapper[4900]: W1202 14:01:30.655030 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d965d6f_b267_4baf_aa4d_9b681c499d96.slice/crio-ada85d3963f378a6dfc1c430e9b1e6b8667c3aa6ad2ad5c56e9e190fc70bea3f WatchSource:0}: Error finding container ada85d3963f378a6dfc1c430e9b1e6b8667c3aa6ad2ad5c56e9e190fc70bea3f: Status 404 returned error can't find the container with id ada85d3963f378a6dfc1c430e9b1e6b8667c3aa6ad2ad5c56e9e190fc70bea3f Dec 02 14:01:31 crc kubenswrapper[4900]: I1202 14:01:31.459486 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" event={"ID":"1d965d6f-b267-4baf-aa4d-9b681c499d96","Type":"ContainerStarted","Data":"ada85d3963f378a6dfc1c430e9b1e6b8667c3aa6ad2ad5c56e9e190fc70bea3f"} Dec 02 14:01:31 crc kubenswrapper[4900]: I1202 14:01:31.465184 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" event={"ID":"10830925-2ade-4408-982f-499d3921a45f","Type":"ContainerStarted","Data":"4a26ccea37e27f24a2c074b0312652eaf642726e0ef050c71603ce9b0fea85f0"} Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.579424 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h5vv9"] Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.602125 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5rkb"] Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.603909 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.609831 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5rkb"] Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.705261 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-config\") pod \"dnsmasq-dns-666b6646f7-l5rkb\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.705360 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-l5rkb\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.705390 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nscr6\" (UniqueName: \"kubernetes.io/projected/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-kube-api-access-nscr6\") pod \"dnsmasq-dns-666b6646f7-l5rkb\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.806468 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-l5rkb\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.806517 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nscr6\" (UniqueName: \"kubernetes.io/projected/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-kube-api-access-nscr6\") pod \"dnsmasq-dns-666b6646f7-l5rkb\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.806578 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-config\") pod \"dnsmasq-dns-666b6646f7-l5rkb\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.807455 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-config\") pod \"dnsmasq-dns-666b6646f7-l5rkb\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.808817 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-l5rkb\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.809281 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7v4xb"] Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.828143 
4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nscr6\" (UniqueName: \"kubernetes.io/projected/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-kube-api-access-nscr6\") pod \"dnsmasq-dns-666b6646f7-l5rkb\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.838275 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lrc9k"] Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.841145 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.869705 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lrc9k"] Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.908581 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s767m\" (UniqueName: \"kubernetes.io/projected/1806230e-09b1-43bf-9ee0-5cdedb5f89be-kube-api-access-s767m\") pod \"dnsmasq-dns-57d769cc4f-lrc9k\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.908791 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-lrc9k\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.908888 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-config\") pod \"dnsmasq-dns-57d769cc4f-lrc9k\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:01:32 crc kubenswrapper[4900]: I1202 14:01:32.938772 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.011463 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s767m\" (UniqueName: \"kubernetes.io/projected/1806230e-09b1-43bf-9ee0-5cdedb5f89be-kube-api-access-s767m\") pod \"dnsmasq-dns-57d769cc4f-lrc9k\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.011545 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-lrc9k\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.011599 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-config\") pod \"dnsmasq-dns-57d769cc4f-lrc9k\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.012664 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-lrc9k\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.012841 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-config\") pod \"dnsmasq-dns-57d769cc4f-lrc9k\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.031380 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s767m\" (UniqueName: \"kubernetes.io/projected/1806230e-09b1-43bf-9ee0-5cdedb5f89be-kube-api-access-s767m\") pod \"dnsmasq-dns-57d769cc4f-lrc9k\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.168954 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.486884 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5rkb"] Dec 02 14:01:33 crc kubenswrapper[4900]: W1202 14:01:33.493435 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9e1c67d_fef0_47ac_8ead_d02c015fb6f5.slice/crio-68737e3bc3cbc4c07fbac2a94f97901ccfd52d36e56f522e4da2f3664b28249a WatchSource:0}: Error finding container 68737e3bc3cbc4c07fbac2a94f97901ccfd52d36e56f522e4da2f3664b28249a: Status 404 returned error can't find the container with id 68737e3bc3cbc4c07fbac2a94f97901ccfd52d36e56f522e4da2f3664b28249a Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.509727 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" event={"ID":"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5","Type":"ContainerStarted","Data":"68737e3bc3cbc4c07fbac2a94f97901ccfd52d36e56f522e4da2f3664b28249a"} Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.667544 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lrc9k"] Dec 02 14:01:33 crc kubenswrapper[4900]: W1202 14:01:33.679836 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1806230e_09b1_43bf_9ee0_5cdedb5f89be.slice/crio-0e696dec16deafd2f2340ecfa518edf301dcc0fe2b767bdff5bf31b8fe4a087d WatchSource:0}: Error finding container 0e696dec16deafd2f2340ecfa518edf301dcc0fe2b767bdff5bf31b8fe4a087d: Status 404 returned error can't find the container with id 0e696dec16deafd2f2340ecfa518edf301dcc0fe2b767bdff5bf31b8fe4a087d Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.803245 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.804805 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.806667 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.806843 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.806978 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.807086 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.807455 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qd4b2" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.807580 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.808625 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.822315 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.822378 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.822423 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e410de46-b373-431a-8486-21a6f1268e41-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.822444 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.822492 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.822518 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e410de46-b373-431a-8486-21a6f1268e41-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc 
kubenswrapper[4900]: I1202 14:01:33.822538 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.822555 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg6dg\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-kube-api-access-lg6dg\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.822579 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.822595 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.822623 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.842396 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.935728 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.949898 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.949992 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.950146 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 
14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.950290 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e410de46-b373-431a-8486-21a6f1268e41-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.950381 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.950428 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e410de46-b373-431a-8486-21a6f1268e41-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.950461 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.950510 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg6dg\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-kube-api-access-lg6dg\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.950556 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.950585 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.950882 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.951981 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.952718 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.953335 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.951182 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.955744 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.958502 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.962828 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e410de46-b373-431a-8486-21a6f1268e41-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.964716 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.965414 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.988923 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e410de46-b373-431a-8486-21a6f1268e41-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.994166 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.996529 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.996806 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gnv2p" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.996923 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.997299 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.997449 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.997525 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 02 14:01:33 crc kubenswrapper[4900]: I1202 14:01:33.997628 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:33.999479 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg6dg\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-kube-api-access-lg6dg\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.008294 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.024410 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " pod="openstack/rabbitmq-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.156830 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.157361 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.157464 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.157532 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.157570 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.157605 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8db82600-180c-4114-8006-551e1b566ce5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.157631 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fxjn\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-kube-api-access-9fxjn\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.157689 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8db82600-180c-4114-8006-551e1b566ce5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.157729 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.157746 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.157793 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.157819 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.259171 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.259233 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.259269 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.259293 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.259315 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8db82600-180c-4114-8006-551e1b566ce5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.259347 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fxjn\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-kube-api-access-9fxjn\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.259382 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8db82600-180c-4114-8006-551e1b566ce5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 
14:01:34.259435 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.259461 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.259487 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.259521 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.260935 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.262390 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.262952 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.263286 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.263416 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.266676 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/8db82600-180c-4114-8006-551e1b566ce5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.271185 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.271244 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8db82600-180c-4114-8006-551e1b566ce5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.283333 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.285262 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fxjn\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-kube-api-access-9fxjn\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.291631 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.294297 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.387264 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.550164 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" event={"ID":"1806230e-09b1-43bf-9ee0-5cdedb5f89be","Type":"ContainerStarted","Data":"0e696dec16deafd2f2340ecfa518edf301dcc0fe2b767bdff5bf31b8fe4a087d"} Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.602012 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 14:01:34 crc kubenswrapper[4900]: W1202 14:01:34.619282 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode410de46_b373_431a_8486_21a6f1268e41.slice/crio-7c84e19d11ab25122a8e6f3036114e2f5d4f980a08ce5085085819da5d92664f WatchSource:0}: Error finding container 7c84e19d11ab25122a8e6f3036114e2f5d4f980a08ce5085085819da5d92664f: Status 404 returned error can't find the container with id 7c84e19d11ab25122a8e6f3036114e2f5d4f980a08ce5085085819da5d92664f Dec 02 14:01:34 crc kubenswrapper[4900]: I1202 14:01:34.847495 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:01:34 crc kubenswrapper[4900]: W1202 14:01:34.879357 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8db82600_180c_4114_8006_551e1b566ce5.slice/crio-e3f54f155d443d2b46db54a399c112d35a27629a2c6c43acd873b59741850247 WatchSource:0}: Error finding container e3f54f155d443d2b46db54a399c112d35a27629a2c6c43acd873b59741850247: Status 404 returned error can't find the container with id e3f54f155d443d2b46db54a399c112d35a27629a2c6c43acd873b59741850247 Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.147843 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.150213 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.153145 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.153528 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jmvrc" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.154460 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.155617 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.163695 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.178619 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.291298 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.291380 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.291401 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plsww\" (UniqueName: \"kubernetes.io/projected/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kube-api-access-plsww\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.291471 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-default\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.291628 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kolla-config\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.291734 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.291870 4900 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.291938 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.393563 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.393689 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.394110 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.393717 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.394387 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.394449 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.394468 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plsww\" (UniqueName: \"kubernetes.io/projected/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kube-api-access-plsww\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.394753 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.395585 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-default\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.395797 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.395859 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-default\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.395962 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kolla-config\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.396487 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kolla-config\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.399042 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.400057 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.409912 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plsww\" (UniqueName: \"kubernetes.io/projected/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kube-api-access-plsww\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.441142 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.493868 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.563810 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e410de46-b373-431a-8486-21a6f1268e41","Type":"ContainerStarted","Data":"7c84e19d11ab25122a8e6f3036114e2f5d4f980a08ce5085085819da5d92664f"} Dec 02 14:01:35 crc kubenswrapper[4900]: I1202 14:01:35.573039 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8db82600-180c-4114-8006-551e1b566ce5","Type":"ContainerStarted","Data":"e3f54f155d443d2b46db54a399c112d35a27629a2c6c43acd873b59741850247"} Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.540658 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.542317 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.545006 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4w5xg" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.545169 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.545545 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.545601 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.549543 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.621792 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.621862 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.621912 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsbv2\" (UniqueName: \"kubernetes.io/projected/ab69f1a2-78df-4097-a527-0b90345cdcfe-kube-api-access-vsbv2\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.621983 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 
14:01:36.622009 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.622240 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.622271 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.622299 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.724108 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.724168 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.724213 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsbv2\" (UniqueName: \"kubernetes.io/projected/ab69f1a2-78df-4097-a527-0b90345cdcfe-kube-api-access-vsbv2\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.724238 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.724264 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.724288 
4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.724320 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.724343 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.724674 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.724772 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.726192 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.726503 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.727266 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.729308 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.731839 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.738960 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsbv2\" (UniqueName: \"kubernetes.io/projected/ab69f1a2-78df-4097-a527-0b90345cdcfe-kube-api-access-vsbv2\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.743284 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.891416 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.967095 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.968390 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.972154 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-f6w2x" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.972557 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.976472 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 02 14:01:36 crc kubenswrapper[4900]: I1202 14:01:36.981067 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.032390 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-memcached-tls-certs\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.032464 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-kolla-config\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.032507 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7khcm\" (UniqueName: \"kubernetes.io/projected/812fa799-d734-4151-b87f-25d638295714-kube-api-access-7khcm\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.032565 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-config-data\") pod \"memcached-0\" (UID: 
\"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.032716 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-combined-ca-bundle\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.133689 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-memcached-tls-certs\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.133746 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-kolla-config\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.133800 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7khcm\" (UniqueName: \"kubernetes.io/projected/812fa799-d734-4151-b87f-25d638295714-kube-api-access-7khcm\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.133936 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-config-data\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.134525 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-kolla-config\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.134589 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-combined-ca-bundle\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.135126 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-config-data\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.151306 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-memcached-tls-certs\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.152863 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-combined-ca-bundle\") pod \"memcached-0\" 
(UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.160430 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7khcm\" (UniqueName: \"kubernetes.io/projected/812fa799-d734-4151-b87f-25d638295714-kube-api-access-7khcm\") pod \"memcached-0\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " pod="openstack/memcached-0" Dec 02 14:01:37 crc kubenswrapper[4900]: I1202 14:01:37.298338 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 02 14:01:39 crc kubenswrapper[4900]: I1202 14:01:39.246989 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:01:39 crc kubenswrapper[4900]: I1202 14:01:39.248436 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 14:01:39 crc kubenswrapper[4900]: I1202 14:01:39.251444 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-pzvrc" Dec 02 14:01:39 crc kubenswrapper[4900]: I1202 14:01:39.252360 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:01:39 crc kubenswrapper[4900]: I1202 14:01:39.369244 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdjnc\" (UniqueName: \"kubernetes.io/projected/7d174e45-558a-4540-8ff2-65fbfb554be5-kube-api-access-xdjnc\") pod \"kube-state-metrics-0\" (UID: \"7d174e45-558a-4540-8ff2-65fbfb554be5\") " pod="openstack/kube-state-metrics-0" Dec 02 14:01:39 crc kubenswrapper[4900]: I1202 14:01:39.470997 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdjnc\" (UniqueName: \"kubernetes.io/projected/7d174e45-558a-4540-8ff2-65fbfb554be5-kube-api-access-xdjnc\") pod \"kube-state-metrics-0\" (UID: \"7d174e45-558a-4540-8ff2-65fbfb554be5\") " pod="openstack/kube-state-metrics-0" Dec 02 14:01:39 crc kubenswrapper[4900]: I1202 14:01:39.487906 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdjnc\" (UniqueName: \"kubernetes.io/projected/7d174e45-558a-4540-8ff2-65fbfb554be5-kube-api-access-xdjnc\") pod \"kube-state-metrics-0\" (UID: \"7d174e45-558a-4540-8ff2-65fbfb554be5\") " pod="openstack/kube-state-metrics-0" Dec 02 14:01:39 crc kubenswrapper[4900]: I1202 14:01:39.622513 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 14:01:40 crc kubenswrapper[4900]: I1202 14:01:40.118755 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.288335 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gn6td"] Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.289683 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.291506 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.295347 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9cwqh"] Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.296077 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qjlss" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.297007 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.297109 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.366097 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gn6td"] Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.396112 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9cwqh"] Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.454782 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-run\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.454836 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79247d6-28ab-4234-a191-8799418aa3ea-scripts\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.454863 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-lib\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.454887 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-ovn-controller-tls-certs\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.454916 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-log\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.454941 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " 
pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.454957 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run-ovn\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.454993 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr7mb\" (UniqueName: \"kubernetes.io/projected/f79247d6-28ab-4234-a191-8799418aa3ea-kube-api-access-hr7mb\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.455034 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-combined-ca-bundle\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.455254 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-etc-ovs\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.455277 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-log-ovn\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.455295 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl9gj\" (UniqueName: \"kubernetes.io/projected/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-kube-api-access-pl9gj\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.455582 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-scripts\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.557240 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-run\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.557294 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79247d6-28ab-4234-a191-8799418aa3ea-scripts\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc 
kubenswrapper[4900]: I1202 14:01:42.557320 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-lib\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.557343 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-ovn-controller-tls-certs\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.557371 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-log\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.557396 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.557411 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run-ovn\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.557918 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run-ovn\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558010 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558106 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-run\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558150 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-log\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558196 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-lib\") pod \"ovn-controller-ovs-9cwqh\" (UID: 
\"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558244 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr7mb\" (UniqueName: \"kubernetes.io/projected/f79247d6-28ab-4234-a191-8799418aa3ea-kube-api-access-hr7mb\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558286 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-combined-ca-bundle\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558304 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-etc-ovs\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558319 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-log-ovn\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558939 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl9gj\" (UniqueName: \"kubernetes.io/projected/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-kube-api-access-pl9gj\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558959 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-scripts\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.560744 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-scripts\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558714 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-log-ovn\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.558715 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-etc-ovs\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.561455 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/f79247d6-28ab-4234-a191-8799418aa3ea-scripts\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.569970 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-combined-ca-bundle\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.570851 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-ovn-controller-tls-certs\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.585349 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr7mb\" (UniqueName: \"kubernetes.io/projected/f79247d6-28ab-4234-a191-8799418aa3ea-kube-api-access-hr7mb\") pod \"ovn-controller-ovs-9cwqh\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") " pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.586284 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl9gj\" (UniqueName: \"kubernetes.io/projected/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-kube-api-access-pl9gj\") pod \"ovn-controller-gn6td\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " pod="openstack/ovn-controller-gn6td" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.688178 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:01:42 crc kubenswrapper[4900]: I1202 14:01:42.690696 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gn6td" Dec 02 14:01:43 crc kubenswrapper[4900]: I1202 14:01:43.641771 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79","Type":"ContainerStarted","Data":"3597d0f3b8bd621961cfd6662c855263bfc068c2011d5e3725f624ab2a68e5bf"} Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.116410 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.116469 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.116509 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.116962 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6f7e930d50720476a444b744878daf723fcb619125b830c5f6dce6cf097c072"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.117014 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://b6f7e930d50720476a444b744878daf723fcb619125b830c5f6dce6cf097c072" gracePeriod=600 Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.558866 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.560355 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.562288 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.564868 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8fzbf" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.565150 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.565504 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.566812 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.575537 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.721221 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.721569 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8r5q\" (UniqueName: \"kubernetes.io/projected/c353c599-462c-4196-a35c-7622350bb349-kube-api-access-j8r5q\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.721605 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.721696 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-config\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.721829 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.721883 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.721910 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.722002 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c353c599-462c-4196-a35c-7622350bb349-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.824880 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-config\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.824952 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.824976 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.824998 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.825043 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c353c599-462c-4196-a35c-7622350bb349-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.825112 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.825134 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8r5q\" (UniqueName: \"kubernetes.io/projected/c353c599-462c-4196-a35c-7622350bb349-kube-api-access-j8r5q\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.825152 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 
14:01:45.825373 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.826203 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.826384 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-config\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.827134 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c353c599-462c-4196-a35c-7622350bb349-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.832045 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.832825 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.834099 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.846028 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8r5q\" (UniqueName: \"kubernetes.io/projected/c353c599-462c-4196-a35c-7622350bb349-kube-api-access-j8r5q\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.859829 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:45 crc kubenswrapper[4900]: I1202 14:01:45.886162 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.190730 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.193345 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.199359 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.199710 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-sqkh5" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.199978 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.200185 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.202724 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.333074 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.333158 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.333191 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.333301 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lsjk\" (UniqueName: \"kubernetes.io/projected/096b1286-863b-44aa-ac7e-5cd509d99950-kube-api-access-5lsjk\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.333408 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.334223 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc 
kubenswrapper[4900]: I1202 14:01:46.334387 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.334510 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-config\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.436885 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.436980 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.437130 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.437131 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lsjk\" (UniqueName: \"kubernetes.io/projected/096b1286-863b-44aa-ac7e-5cd509d99950-kube-api-access-5lsjk\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.437305 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.437348 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.437382 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.437538 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-config\") 
pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.437659 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.439474 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.440356 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-config\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.441428 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.442281 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.444949 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.462352 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.469974 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lsjk\" (UniqueName: \"kubernetes.io/projected/096b1286-863b-44aa-ac7e-5cd509d99950-kube-api-access-5lsjk\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.480375 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.515129 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.672928 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="b6f7e930d50720476a444b744878daf723fcb619125b830c5f6dce6cf097c072" exitCode=0 Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.673005 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"b6f7e930d50720476a444b744878daf723fcb619125b830c5f6dce6cf097c072"} Dec 02 14:01:46 crc kubenswrapper[4900]: I1202 14:01:46.673050 4900 scope.go:117] "RemoveContainer" containerID="96fc286beb52d1fa09b32c5aa1607bec1c64d198ad304a0191d978063a0b9ab5" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.891112 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.891885 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m989t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-h5vv9_openstack(10830925-2ade-4408-982f-499d3921a45f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.893419 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" podUID="10830925-2ade-4408-982f-499d3921a45f" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.967374 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.967604 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nscr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-l5rkb_openstack(c9e1c67d-fef0-47ac-8ead-d02c015fb6f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.968932 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" podUID="c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.987164 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.987330 4900 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fblt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7v4xb_openstack(1d965d6f-b267-4baf-aa4d-9b681c499d96): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.988573 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" podUID="1d965d6f-b267-4baf-aa4d-9b681c499d96" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.998215 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.998377 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s767m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-lrc9k_openstack(1806230e-09b1-43bf-9ee0-5cdedb5f89be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:58 crc kubenswrapper[4900]: E1202 14:01:58.999846 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" podUID="1806230e-09b1-43bf-9ee0-5cdedb5f89be" Dec 02 14:01:59 crc kubenswrapper[4900]: E1202 14:01:59.802470 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" podUID="1806230e-09b1-43bf-9ee0-5cdedb5f89be" Dec 02 14:01:59 crc kubenswrapper[4900]: E1202 14:01:59.808145 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" podUID="c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" Dec 02 14:01:59 crc kubenswrapper[4900]: E1202 14:01:59.840459 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 02 14:01:59 crc kubenswrapper[4900]: E1202 14:01:59.840618 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp 
/tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lg6dg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(e410de46-b373-431a-8486-21a6f1268e41): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:59 crc kubenswrapper[4900]: E1202 14:01:59.843137 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="e410de46-b373-431a-8486-21a6f1268e41" Dec 02 14:02:00 crc kubenswrapper[4900]: E1202 14:02:00.031893 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 02 14:02:00 crc kubenswrapper[4900]: E1202 14:02:00.032317 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp 
/tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fxjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(8db82600-180c-4114-8006-551e1b566ce5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:02:00 crc kubenswrapper[4900]: E1202 14:02:00.033492 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="8db82600-180c-4114-8006-551e1b566ce5" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.250949 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.273571 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.437367 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m989t\" (UniqueName: \"kubernetes.io/projected/10830925-2ade-4408-982f-499d3921a45f-kube-api-access-m989t\") pod \"10830925-2ade-4408-982f-499d3921a45f\" (UID: \"10830925-2ade-4408-982f-499d3921a45f\") " Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.437783 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10830925-2ade-4408-982f-499d3921a45f-config\") pod \"10830925-2ade-4408-982f-499d3921a45f\" (UID: \"10830925-2ade-4408-982f-499d3921a45f\") " Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.437916 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-dns-svc\") pod \"1d965d6f-b267-4baf-aa4d-9b681c499d96\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.437949 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-config\") pod \"1d965d6f-b267-4baf-aa4d-9b681c499d96\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.438010 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fblt8\" (UniqueName: \"kubernetes.io/projected/1d965d6f-b267-4baf-aa4d-9b681c499d96-kube-api-access-fblt8\") pod \"1d965d6f-b267-4baf-aa4d-9b681c499d96\" (UID: \"1d965d6f-b267-4baf-aa4d-9b681c499d96\") " Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.438398 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10830925-2ade-4408-982f-499d3921a45f-config" (OuterVolumeSpecName: "config") pod "10830925-2ade-4408-982f-499d3921a45f" (UID: "10830925-2ade-4408-982f-499d3921a45f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.438914 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d965d6f-b267-4baf-aa4d-9b681c499d96" (UID: "1d965d6f-b267-4baf-aa4d-9b681c499d96"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.439168 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-config" (OuterVolumeSpecName: "config") pod "1d965d6f-b267-4baf-aa4d-9b681c499d96" (UID: "1d965d6f-b267-4baf-aa4d-9b681c499d96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.449473 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10830925-2ade-4408-982f-499d3921a45f-kube-api-access-m989t" (OuterVolumeSpecName: "kube-api-access-m989t") pod "10830925-2ade-4408-982f-499d3921a45f" (UID: "10830925-2ade-4408-982f-499d3921a45f"). InnerVolumeSpecName "kube-api-access-m989t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.454823 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d965d6f-b267-4baf-aa4d-9b681c499d96-kube-api-access-fblt8" (OuterVolumeSpecName: "kube-api-access-fblt8") pod "1d965d6f-b267-4baf-aa4d-9b681c499d96" (UID: "1d965d6f-b267-4baf-aa4d-9b681c499d96"). InnerVolumeSpecName "kube-api-access-fblt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.456742 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.461400 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.539837 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m989t\" (UniqueName: \"kubernetes.io/projected/10830925-2ade-4408-982f-499d3921a45f-kube-api-access-m989t\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.539867 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10830925-2ade-4408-982f-499d3921a45f-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.539894 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.539905 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d965d6f-b267-4baf-aa4d-9b681c499d96-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.539916 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fblt8\" (UniqueName: \"kubernetes.io/projected/1d965d6f-b267-4baf-aa4d-9b681c499d96-kube-api-access-fblt8\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.774754 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9cwqh"] Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.811637 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ab69f1a2-78df-4097-a527-0b90345cdcfe","Type":"ContainerStarted","Data":"d35c27e6bae1249db1889e71323773e48e3ea042690f40f008866b12fe4d35ff"} Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.822862 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" event={"ID":"10830925-2ade-4408-982f-499d3921a45f","Type":"ContainerDied","Data":"4a26ccea37e27f24a2c074b0312652eaf642726e0ef050c71603ce9b0fea85f0"} Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.822896 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-h5vv9" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.825551 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.825544 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7v4xb" event={"ID":"1d965d6f-b267-4baf-aa4d-9b681c499d96","Type":"ContainerDied","Data":"ada85d3963f378a6dfc1c430e9b1e6b8667c3aa6ad2ad5c56e9e190fc70bea3f"} Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.826959 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"812fa799-d734-4151-b87f-25d638295714","Type":"ContainerStarted","Data":"8ddeb14ea9d43f8ce1a42ac22c9b7fe2e897d371a21d098cd1ac4669370682eb"} Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.860255 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gn6td"] Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.873476 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"71201562a586bb41b092fbbc0aed881de288c0da40461c0877afbe0f47cb3b45"} Dec 02 14:02:00 crc kubenswrapper[4900]: E1202 14:02:00.875140 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="e410de46-b373-431a-8486-21a6f1268e41" Dec 02 14:02:00 crc kubenswrapper[4900]: E1202 14:02:00.876901 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="8db82600-180c-4114-8006-551e1b566ce5" Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.879000 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.969230 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.984144 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h5vv9"] Dec 02 14:02:00 crc kubenswrapper[4900]: I1202 14:02:00.989386 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-h5vv9"] Dec 02 14:02:01 crc kubenswrapper[4900]: I1202 14:02:01.003996 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7v4xb"] Dec 02 14:02:01 crc kubenswrapper[4900]: I1202 14:02:01.011603 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7v4xb"] Dec 02 14:02:01 crc kubenswrapper[4900]: I1202 14:02:01.517571 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 14:02:01 crc kubenswrapper[4900]: I1202 14:02:01.882436 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9cwqh" event={"ID":"f79247d6-28ab-4234-a191-8799418aa3ea","Type":"ContainerStarted","Data":"c69a2ac94588ffbd4f09ec232f2e5d0bdddb93f20ea02ec34a5f1d1973fc1ec8"} Dec 02 14:02:01 crc kubenswrapper[4900]: I1202 14:02:01.884082 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"096b1286-863b-44aa-ac7e-5cd509d99950","Type":"ContainerStarted","Data":"c8af75c8342908413c958c358029f92ea134aa7608aef6455d8f79d94bbe561e"} Dec 02 14:02:01 crc kubenswrapper[4900]: I1202 14:02:01.885335 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7d174e45-558a-4540-8ff2-65fbfb554be5","Type":"ContainerStarted","Data":"91fa9f38dec184672c5784a4f892d333267923543430141305b192c5d4bfd6f3"} Dec 02 14:02:01 crc kubenswrapper[4900]: I1202 14:02:01.888028 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gn6td" event={"ID":"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d","Type":"ContainerStarted","Data":"ad5f539e71ab05eda14aa7b310a218ef88791d9bc44443f800451dc55c9259d7"} Dec 02 14:02:02 crc kubenswrapper[4900]: I1202 14:02:02.923748 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10830925-2ade-4408-982f-499d3921a45f" path="/var/lib/kubelet/pods/10830925-2ade-4408-982f-499d3921a45f/volumes" Dec 02 14:02:02 crc kubenswrapper[4900]: I1202 14:02:02.926434 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d965d6f-b267-4baf-aa4d-9b681c499d96" path="/var/lib/kubelet/pods/1d965d6f-b267-4baf-aa4d-9b681c499d96/volumes" Dec 02 14:02:03 crc kubenswrapper[4900]: W1202 14:02:03.021822 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc353c599_462c_4196_a35c_7622350bb349.slice/crio-fc28f1797409b2b80f3bcfbd0305eff7f5b9cecb3df5b8c82628029421a94fc8 WatchSource:0}: Error finding container fc28f1797409b2b80f3bcfbd0305eff7f5b9cecb3df5b8c82628029421a94fc8: Status 404 returned error can't find the container with id fc28f1797409b2b80f3bcfbd0305eff7f5b9cecb3df5b8c82628029421a94fc8 Dec 02 14:02:03 crc kubenswrapper[4900]: I1202 14:02:03.912211 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c353c599-462c-4196-a35c-7622350bb349","Type":"ContainerStarted","Data":"fc28f1797409b2b80f3bcfbd0305eff7f5b9cecb3df5b8c82628029421a94fc8"} Dec 02 14:02:04 crc kubenswrapper[4900]: I1202 14:02:04.925362 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"812fa799-d734-4151-b87f-25d638295714","Type":"ContainerStarted","Data":"32a45c77a6ef2f6050d3ead873d9d3fa8d6013c262c2306ca091361bfe251ace"} Dec 02 14:02:04 crc kubenswrapper[4900]: I1202 14:02:04.925636 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 02 14:02:04 crc kubenswrapper[4900]: I1202 14:02:04.927278 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79","Type":"ContainerStarted","Data":"8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed"} Dec 02 14:02:04 crc kubenswrapper[4900]: I1202 14:02:04.933045 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ab69f1a2-78df-4097-a527-0b90345cdcfe","Type":"ContainerStarted","Data":"811c411eac126ccd19ce31de3b8fe1f7fbde7d7f6f765332d7fb6220e74d4d85"} Dec 02 14:02:05 crc kubenswrapper[4900]: I1202 14:02:05.083469 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=25.827669028 podStartE2EDuration="29.083450325s" podCreationTimestamp="2025-12-02 14:01:36 +0000 UTC" firstStartedPulling="2025-12-02 14:02:00.467414701 +0000 UTC m=+1165.883228552" 
lastFinishedPulling="2025-12-02 14:02:03.723195998 +0000 UTC m=+1169.139009849" observedRunningTime="2025-12-02 14:02:05.080597055 +0000 UTC m=+1170.496410906" watchObservedRunningTime="2025-12-02 14:02:05.083450325 +0000 UTC m=+1170.499264176" Dec 02 14:02:07 crc kubenswrapper[4900]: I1202 14:02:07.967951 4900 generic.go:334] "Generic (PLEG): container finished" podID="f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" containerID="8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed" exitCode=0 Dec 02 14:02:07 crc kubenswrapper[4900]: I1202 14:02:07.968755 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79","Type":"ContainerDied","Data":"8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed"} Dec 02 14:02:08 crc kubenswrapper[4900]: I1202 14:02:08.983233 4900 generic.go:334] "Generic (PLEG): container finished" podID="ab69f1a2-78df-4097-a527-0b90345cdcfe" containerID="811c411eac126ccd19ce31de3b8fe1f7fbde7d7f6f765332d7fb6220e74d4d85" exitCode=0 Dec 02 14:02:08 crc kubenswrapper[4900]: I1202 14:02:08.983293 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ab69f1a2-78df-4097-a527-0b90345cdcfe","Type":"ContainerDied","Data":"811c411eac126ccd19ce31de3b8fe1f7fbde7d7f6f765332d7fb6220e74d4d85"} Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.001354 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c353c599-462c-4196-a35c-7622350bb349","Type":"ContainerStarted","Data":"757b9daa69d51a10f6b4b9ded6c9bdd6924c1d21fda0bc963aa398285554bfc0"} Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.003114 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"096b1286-863b-44aa-ac7e-5cd509d99950","Type":"ContainerStarted","Data":"d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93"} Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.004795 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7d174e45-558a-4540-8ff2-65fbfb554be5","Type":"ContainerStarted","Data":"ec12c1067c3d80232f6837ff0d9be22301ba085b3f5a155a813302dd4bc96097"} Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.004938 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.006980 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79","Type":"ContainerStarted","Data":"30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9"} Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.008614 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gn6td" event={"ID":"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d","Type":"ContainerStarted","Data":"c8588296af791c00c99ca2cc1241929618a4f8fd2a651218322a322c131b0851"} Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.008769 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gn6td" Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.010680 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ab69f1a2-78df-4097-a527-0b90345cdcfe","Type":"ContainerStarted","Data":"f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9"} Dec 02 
14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.011942 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9cwqh" event={"ID":"f79247d6-28ab-4234-a191-8799418aa3ea","Type":"ContainerStarted","Data":"83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6"} Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.027393 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.1828425 podStartE2EDuration="32.027367651s" podCreationTimestamp="2025-12-02 14:01:39 +0000 UTC" firstStartedPulling="2025-12-02 14:02:00.869541905 +0000 UTC m=+1166.285355756" lastFinishedPulling="2025-12-02 14:02:10.714067056 +0000 UTC m=+1176.129880907" observedRunningTime="2025-12-02 14:02:11.020304213 +0000 UTC m=+1176.436118064" watchObservedRunningTime="2025-12-02 14:02:11.027367651 +0000 UTC m=+1176.443181502" Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.059110 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=32.798497589 podStartE2EDuration="36.059093991s" podCreationTimestamp="2025-12-02 14:01:35 +0000 UTC" firstStartedPulling="2025-12-02 14:02:00.46488423 +0000 UTC m=+1165.880698081" lastFinishedPulling="2025-12-02 14:02:03.725480622 +0000 UTC m=+1169.141294483" observedRunningTime="2025-12-02 14:02:11.055985594 +0000 UTC m=+1176.471799445" watchObservedRunningTime="2025-12-02 14:02:11.059093991 +0000 UTC m=+1176.474907842" Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.099724 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=16.073552826 podStartE2EDuration="37.09969752s" podCreationTimestamp="2025-12-02 14:01:34 +0000 UTC" firstStartedPulling="2025-12-02 14:01:42.69583622 +0000 UTC m=+1148.111650111" lastFinishedPulling="2025-12-02 14:02:03.721980914 +0000 UTC m=+1169.137794805" observedRunningTime="2025-12-02 14:02:11.095255595 +0000 UTC m=+1176.511069466" watchObservedRunningTime="2025-12-02 14:02:11.09969752 +0000 UTC m=+1176.515511391" Dec 02 14:02:11 crc kubenswrapper[4900]: I1202 14:02:11.107823 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gn6td" podStartSLOduration=20.432718504 podStartE2EDuration="29.107800817s" podCreationTimestamp="2025-12-02 14:01:42 +0000 UTC" firstStartedPulling="2025-12-02 14:02:00.869170925 +0000 UTC m=+1166.284984776" lastFinishedPulling="2025-12-02 14:02:09.544253198 +0000 UTC m=+1174.960067089" observedRunningTime="2025-12-02 14:02:11.07616632 +0000 UTC m=+1176.491980171" watchObservedRunningTime="2025-12-02 14:02:11.107800817 +0000 UTC m=+1176.523614668" Dec 02 14:02:12 crc kubenswrapper[4900]: I1202 14:02:12.033320 4900 generic.go:334] "Generic (PLEG): container finished" podID="f79247d6-28ab-4234-a191-8799418aa3ea" containerID="83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6" exitCode=0 Dec 02 14:02:12 crc kubenswrapper[4900]: I1202 14:02:12.033367 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9cwqh" event={"ID":"f79247d6-28ab-4234-a191-8799418aa3ea","Type":"ContainerDied","Data":"83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6"} Dec 02 14:02:12 crc kubenswrapper[4900]: I1202 14:02:12.299783 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 02 14:02:13 crc kubenswrapper[4900]: I1202 
14:02:13.047472 4900 generic.go:334] "Generic (PLEG): container finished" podID="c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" containerID="ed4781a463175664a1c374769c42fa9c7fd0353b0a63752c89fc13485f7ccb51" exitCode=0 Dec 02 14:02:13 crc kubenswrapper[4900]: I1202 14:02:13.047757 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" event={"ID":"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5","Type":"ContainerDied","Data":"ed4781a463175664a1c374769c42fa9c7fd0353b0a63752c89fc13485f7ccb51"} Dec 02 14:02:13 crc kubenswrapper[4900]: I1202 14:02:13.051331 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9cwqh" event={"ID":"f79247d6-28ab-4234-a191-8799418aa3ea","Type":"ContainerStarted","Data":"2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27"} Dec 02 14:02:13 crc kubenswrapper[4900]: I1202 14:02:13.051377 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9cwqh" event={"ID":"f79247d6-28ab-4234-a191-8799418aa3ea","Type":"ContainerStarted","Data":"231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305"} Dec 02 14:02:13 crc kubenswrapper[4900]: I1202 14:02:13.051499 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:02:13 crc kubenswrapper[4900]: I1202 14:02:13.087493 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9cwqh" podStartSLOduration=22.413757412 podStartE2EDuration="31.087479017s" podCreationTimestamp="2025-12-02 14:01:42 +0000 UTC" firstStartedPulling="2025-12-02 14:02:00.869682359 +0000 UTC m=+1166.285496210" lastFinishedPulling="2025-12-02 14:02:09.543403954 +0000 UTC m=+1174.959217815" observedRunningTime="2025-12-02 14:02:13.085275436 +0000 UTC m=+1178.501089287" watchObservedRunningTime="2025-12-02 14:02:13.087479017 +0000 UTC m=+1178.503292868" Dec 02 14:02:13 crc kubenswrapper[4900]: E1202 14:02:13.703107 4900 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.130:53508->38.102.83.130:46203: write tcp 38.102.83.130:53508->38.102.83.130:46203: write: broken pipe Dec 02 14:02:14 crc kubenswrapper[4900]: I1202 14:02:14.058685 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:02:15 crc kubenswrapper[4900]: I1202 14:02:15.495000 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 02 14:02:15 crc kubenswrapper[4900]: I1202 14:02:15.495352 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.074823 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c353c599-462c-4196-a35c-7622350bb349","Type":"ContainerStarted","Data":"a10158b3879ca4655fc8e6391c12e72686bdf7aae27551ad7cd381abaa366312"} Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.078192 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"096b1286-863b-44aa-ac7e-5cd509d99950","Type":"ContainerStarted","Data":"285f3512fc061d05a5061746933aff39280feb7e3c81097b9fc9c0b9cf0d32da"} Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.080150 4900 generic.go:334] "Generic (PLEG): container finished" podID="1806230e-09b1-43bf-9ee0-5cdedb5f89be" 
containerID="e65c1fc1aa319f6e8d3c930ee0a40591f5e275474b6fcd8a912f52fc4d03633d" exitCode=0 Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.080236 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" event={"ID":"1806230e-09b1-43bf-9ee0-5cdedb5f89be","Type":"ContainerDied","Data":"e65c1fc1aa319f6e8d3c930ee0a40591f5e275474b6fcd8a912f52fc4d03633d"} Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.089265 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" event={"ID":"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5","Type":"ContainerStarted","Data":"177ec5749bd45d9155e6400073f0debc2cf9ae8dc8af851ca004f54d4343b63f"} Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.089493 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.117422 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.588432368 podStartE2EDuration="32.117390471s" podCreationTimestamp="2025-12-02 14:01:44 +0000 UTC" firstStartedPulling="2025-12-02 14:02:03.025549732 +0000 UTC m=+1168.441363593" lastFinishedPulling="2025-12-02 14:02:15.554507805 +0000 UTC m=+1180.970321696" observedRunningTime="2025-12-02 14:02:16.101413142 +0000 UTC m=+1181.517227003" watchObservedRunningTime="2025-12-02 14:02:16.117390471 +0000 UTC m=+1181.533204362" Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.137965 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=16.603122517 podStartE2EDuration="31.137937097s" podCreationTimestamp="2025-12-02 14:01:45 +0000 UTC" firstStartedPulling="2025-12-02 14:02:00.984142537 +0000 UTC m=+1166.399956388" lastFinishedPulling="2025-12-02 14:02:15.518957077 +0000 UTC m=+1180.934770968" observedRunningTime="2025-12-02 14:02:16.13483415 +0000 UTC m=+1181.550648011" watchObservedRunningTime="2025-12-02 14:02:16.137937097 +0000 UTC m=+1181.553750988" Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.164311 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" podStartSLOduration=5.144146078 podStartE2EDuration="44.164286166s" podCreationTimestamp="2025-12-02 14:01:32 +0000 UTC" firstStartedPulling="2025-12-02 14:01:33.495872963 +0000 UTC m=+1138.911686814" lastFinishedPulling="2025-12-02 14:02:12.516013051 +0000 UTC m=+1177.931826902" observedRunningTime="2025-12-02 14:02:16.151700703 +0000 UTC m=+1181.567514594" watchObservedRunningTime="2025-12-02 14:02:16.164286166 +0000 UTC m=+1181.580100027" Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.516233 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.516445 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.564939 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.892548 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:16 crc kubenswrapper[4900]: I1202 14:02:16.892725 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.100049 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" event={"ID":"1806230e-09b1-43bf-9ee0-5cdedb5f89be","Type":"ContainerStarted","Data":"420447dd5eb236c872f176ebf409ec1ef879dd58083f307df999d095444aca04"} Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.100580 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.102322 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e410de46-b373-431a-8486-21a6f1268e41","Type":"ContainerStarted","Data":"5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168"} Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.132357 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" podStartSLOduration=-9223371991.72244 podStartE2EDuration="45.132334265s" podCreationTimestamp="2025-12-02 14:01:32 +0000 UTC" firstStartedPulling="2025-12-02 14:01:33.682336842 +0000 UTC m=+1139.098150693" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:02:17.126422809 +0000 UTC m=+1182.542236690" watchObservedRunningTime="2025-12-02 14:02:17.132334265 +0000 UTC m=+1182.548148146" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.135682 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.169339 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.499405 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lrc9k"] Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.527145 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2vng"] Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.528587 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.532061 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.544520 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2vng"] Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.612364 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-config\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.612456 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.612487 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v69j\" (UniqueName: \"kubernetes.io/projected/c0749ee9-282b-49e6-8980-442f11752092-kube-api-access-4v69j\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.612526 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.623947 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-g4flw"] Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.624941 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.631470 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.637850 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g4flw"] Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.714465 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.714522 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.714546 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-combined-ca-bundle\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.714570 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v69j\" (UniqueName: \"kubernetes.io/projected/c0749ee9-282b-49e6-8980-442f11752092-kube-api-access-4v69j\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.714612 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b06684-2db9-4dca-aa15-53b22ca686d0-config\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.714721 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovs-rundir\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.714764 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.714821 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-config\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " 
pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.714995 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79pk\" (UniqueName: \"kubernetes.io/projected/22b06684-2db9-4dca-aa15-53b22ca686d0-kube-api-access-s79pk\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.715187 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovn-rundir\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.715850 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.715906 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-config\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.715918 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.732754 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v69j\" (UniqueName: \"kubernetes.io/projected/c0749ee9-282b-49e6-8980-442f11752092-kube-api-access-4v69j\") pod \"dnsmasq-dns-6bc7876d45-l2vng\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.799449 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.816791 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.816848 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-combined-ca-bundle\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.816885 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22b06684-2db9-4dca-aa15-53b22ca686d0-config\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.816929 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovs-rundir\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.816979 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s79pk\" (UniqueName: \"kubernetes.io/projected/22b06684-2db9-4dca-aa15-53b22ca686d0-kube-api-access-s79pk\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.817039 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovn-rundir\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.817401 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovn-rundir\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.817489 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovs-rundir\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.818260 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b06684-2db9-4dca-aa15-53b22ca686d0-config\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.820609 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-combined-ca-bundle\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.821178 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.872501 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.885109 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s79pk\" (UniqueName: \"kubernetes.io/projected/22b06684-2db9-4dca-aa15-53b22ca686d0-kube-api-access-s79pk\") pod \"ovn-controller-metrics-g4flw\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.896380 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5rkb"] Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.939637 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.953767 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-jsk7w"] Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.955550 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:17 crc kubenswrapper[4900]: I1202 14:02:17.965337 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.012704 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jsk7w"] Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.123609 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.130810 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.130897 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-config\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.130938 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv25m\" (UniqueName: \"kubernetes.io/projected/b280cf8d-be09-4643-b9fd-e444c63a0440-kube-api-access-sv25m\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.131016 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.131052 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-dns-svc\") pod 
\"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.149951 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8db82600-180c-4114-8006-551e1b566ce5","Type":"ContainerStarted","Data":"1edf53c496618c33923bb60078803c42df40f981136a82e37805dfe6b475de7b"} Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.151354 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" podUID="c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" containerName="dnsmasq-dns" containerID="cri-o://177ec5749bd45d9155e6400073f0debc2cf9ae8dc8af851ca004f54d4343b63f" gracePeriod=10 Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.232276 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.232333 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-dns-svc\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.232370 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.232434 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-config\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.232466 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv25m\" (UniqueName: \"kubernetes.io/projected/b280cf8d-be09-4643-b9fd-e444c63a0440-kube-api-access-sv25m\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.233610 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.234159 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-dns-svc\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.235046 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-config\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.236025 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.255427 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv25m\" (UniqueName: \"kubernetes.io/projected/b280cf8d-be09-4643-b9fd-e444c63a0440-kube-api-access-sv25m\") pod \"dnsmasq-dns-8554648995-jsk7w\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.283229 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.283565 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.542583 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g4flw"] Dec 02 14:02:18 crc kubenswrapper[4900]: W1202 14:02:18.549994 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22b06684_2db9_4dca_aa15_53b22ca686d0.slice/crio-b5fc4b67d130f36f9b5d1bc656d626882211ebcd6de60627d2ad0c0d8e430f97 WatchSource:0}: Error finding container b5fc4b67d130f36f9b5d1bc656d626882211ebcd6de60627d2ad0c0d8e430f97: Status 404 returned error can't find the container with id b5fc4b67d130f36f9b5d1bc656d626882211ebcd6de60627d2ad0c0d8e430f97 Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.551150 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jsk7w"] Dec 02 14:02:18 crc kubenswrapper[4900]: W1202 14:02:18.563161 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb280cf8d_be09_4643_b9fd_e444c63a0440.slice/crio-8ae7910e7b91c545c62c9cb9a115382dd9cd21e4e16484aaa6a1fcaa76fbb403 WatchSource:0}: Error finding container 8ae7910e7b91c545c62c9cb9a115382dd9cd21e4e16484aaa6a1fcaa76fbb403: Status 404 returned error can't find the container with id 8ae7910e7b91c545c62c9cb9a115382dd9cd21e4e16484aaa6a1fcaa76fbb403 Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.645011 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2vng"] Dec 02 14:02:18 crc kubenswrapper[4900]: W1202 14:02:18.651293 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0749ee9_282b_49e6_8980_442f11752092.slice/crio-980a0bcb6784f76d7c80f32bd397289e062f5bacd47d0278491a7a5a64028882 WatchSource:0}: Error finding container 980a0bcb6784f76d7c80f32bd397289e062f5bacd47d0278491a7a5a64028882: Status 404 returned error can't find the container with id 980a0bcb6784f76d7c80f32bd397289e062f5bacd47d0278491a7a5a64028882 Dec 
02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.887737 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:18 crc kubenswrapper[4900]: I1202 14:02:18.947222 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.162236 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" event={"ID":"c0749ee9-282b-49e6-8980-442f11752092","Type":"ContainerStarted","Data":"980a0bcb6784f76d7c80f32bd397289e062f5bacd47d0278491a7a5a64028882"} Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.164248 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jsk7w" event={"ID":"b280cf8d-be09-4643-b9fd-e444c63a0440","Type":"ContainerStarted","Data":"8ae7910e7b91c545c62c9cb9a115382dd9cd21e4e16484aaa6a1fcaa76fbb403"} Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.166318 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g4flw" event={"ID":"22b06684-2db9-4dca-aa15-53b22ca686d0","Type":"ContainerStarted","Data":"b5fc4b67d130f36f9b5d1bc656d626882211ebcd6de60627d2ad0c0d8e430f97"} Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.168184 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" podUID="1806230e-09b1-43bf-9ee0-5cdedb5f89be" containerName="dnsmasq-dns" containerID="cri-o://420447dd5eb236c872f176ebf409ec1ef879dd58083f307df999d095444aca04" gracePeriod=10 Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.168493 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.250307 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.505700 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.507172 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.511005 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kkwkx" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.511571 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.511732 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.511851 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.540868 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.562298 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.562412 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwzfl\" (UniqueName: \"kubernetes.io/projected/9c17cf84-2174-42d8-880a-9a643a161ef4-kube-api-access-vwzfl\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.562568 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.562621 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-config\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.562680 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.562703 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.562728 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-scripts\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: 
I1202 14:02:19.603342 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2vng"] Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.616964 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-6rvcj"] Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.618632 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.628077 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-6rvcj"] Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.661148 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.664377 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.664423 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-config\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.664470 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.664488 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg45d\" (UniqueName: \"kubernetes.io/projected/a56b8ea0-ea80-4320-927c-14e52f803593-kube-api-access-jg45d\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.664515 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwzfl\" (UniqueName: \"kubernetes.io/projected/9c17cf84-2174-42d8-880a-9a643a161ef4-kube-api-access-vwzfl\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.664537 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.664569 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc 
kubenswrapper[4900]: I1202 14:02:19.664589 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-config\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.664609 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.664626 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.664652 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.664667 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-scripts\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.665467 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-scripts\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.666262 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-config\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.668565 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.676716 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.680781 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: 
I1202 14:02:19.683716 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwzfl\" (UniqueName: \"kubernetes.io/projected/9c17cf84-2174-42d8-880a-9a643a161ef4-kube-api-access-vwzfl\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.695190 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.766459 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-config\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.766542 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg45d\" (UniqueName: \"kubernetes.io/projected/a56b8ea0-ea80-4320-927c-14e52f803593-kube-api-access-jg45d\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.766579 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.766621 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.766676 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.767329 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-config\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.767377 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.767912 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.768328 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.783209 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg45d\" (UniqueName: \"kubernetes.io/projected/a56b8ea0-ea80-4320-927c-14e52f803593-kube-api-access-jg45d\") pod \"dnsmasq-dns-b8fbc5445-6rvcj\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.853202 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 02 14:02:19 crc kubenswrapper[4900]: I1202 14:02:19.946224 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.104043 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 02 14:02:20 crc kubenswrapper[4900]: W1202 14:02:20.110972 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c17cf84_2174_42d8_880a_9a643a161ef4.slice/crio-0af10bb990a0a635a2e90374c0fdc9fc55b31873db85dd5fee1fdb78f8a57303 WatchSource:0}: Error finding container 0af10bb990a0a635a2e90374c0fdc9fc55b31873db85dd5fee1fdb78f8a57303: Status 404 returned error can't find the container with id 0af10bb990a0a635a2e90374c0fdc9fc55b31873db85dd5fee1fdb78f8a57303 Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.174157 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9c17cf84-2174-42d8-880a-9a643a161ef4","Type":"ContainerStarted","Data":"0af10bb990a0a635a2e90374c0fdc9fc55b31873db85dd5fee1fdb78f8a57303"} Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.416933 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-6rvcj"] Dec 02 14:02:20 crc kubenswrapper[4900]: W1202 14:02:20.417800 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda56b8ea0_ea80_4320_927c_14e52f803593.slice/crio-51449dbe615d367a74c7d0663604c130899b40be6456391f513a737ea73e6e2e WatchSource:0}: Error finding container 51449dbe615d367a74c7d0663604c130899b40be6456391f513a737ea73e6e2e: Status 404 returned error can't find the container with id 51449dbe615d367a74c7d0663604c130899b40be6456391f513a737ea73e6e2e Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.653440 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.665954 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.669740 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.669972 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.670093 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.670464 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-l6g6b" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.696627 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.782320 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.782511 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwxjx\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-kube-api-access-qwxjx\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.782600 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.783346 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-lock\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.783437 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-cache\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.885046 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.885161 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwxjx\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-kube-api-access-qwxjx\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.885228 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.885271 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-lock\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.885324 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-cache\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.885499 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: E1202 14:02:20.886194 4900 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 14:02:20 crc kubenswrapper[4900]: E1202 14:02:20.886224 4900 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 14:02:20 crc kubenswrapper[4900]: E1202 14:02:20.886266 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift podName:305da939-8e7b-4fce-95f9-95d90218a1f0 nodeName:}" failed. No retries permitted until 2025-12-02 14:02:21.386250603 +0000 UTC m=+1186.802064454 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift") pod "swift-storage-0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0") : configmap "swift-ring-files" not found Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.886388 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-cache\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.886405 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-lock\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.909716 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwxjx\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-kube-api-access-qwxjx\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:20 crc kubenswrapper[4900]: I1202 14:02:20.917399 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.193683 4900 generic.go:334] "Generic (PLEG): container finished" podID="c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" containerID="177ec5749bd45d9155e6400073f0debc2cf9ae8dc8af851ca004f54d4343b63f" exitCode=0 Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.193948 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" event={"ID":"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5","Type":"ContainerDied","Data":"177ec5749bd45d9155e6400073f0debc2cf9ae8dc8af851ca004f54d4343b63f"} Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.200071 4900 generic.go:334] "Generic (PLEG): container finished" podID="c0749ee9-282b-49e6-8980-442f11752092" containerID="4dd892c398d6bef54fbd9170d8d84245a89b0cf3173bfe30fa33d1c57ad7b7a8" exitCode=0 Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.200112 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" event={"ID":"c0749ee9-282b-49e6-8980-442f11752092","Type":"ContainerDied","Data":"4dd892c398d6bef54fbd9170d8d84245a89b0cf3173bfe30fa33d1c57ad7b7a8"} Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.204914 4900 generic.go:334] "Generic (PLEG): container finished" podID="b280cf8d-be09-4643-b9fd-e444c63a0440" containerID="42cec345ec4ad876d44e4fff9e709d51ebfb2245f09239348d617a61574255bf" exitCode=0 Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.204961 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jsk7w" event={"ID":"b280cf8d-be09-4643-b9fd-e444c63a0440","Type":"ContainerDied","Data":"42cec345ec4ad876d44e4fff9e709d51ebfb2245f09239348d617a61574255bf"} Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.211169 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g4flw" 
event={"ID":"22b06684-2db9-4dca-aa15-53b22ca686d0","Type":"ContainerStarted","Data":"19705f019e43eb0fec10afe7795a2b153f6e8f761831faa811bf6940dbd55294"} Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.216013 4900 generic.go:334] "Generic (PLEG): container finished" podID="1806230e-09b1-43bf-9ee0-5cdedb5f89be" containerID="420447dd5eb236c872f176ebf409ec1ef879dd58083f307df999d095444aca04" exitCode=0 Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.216110 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" event={"ID":"1806230e-09b1-43bf-9ee0-5cdedb5f89be","Type":"ContainerDied","Data":"420447dd5eb236c872f176ebf409ec1ef879dd58083f307df999d095444aca04"} Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.220759 4900 generic.go:334] "Generic (PLEG): container finished" podID="a56b8ea0-ea80-4320-927c-14e52f803593" containerID="8b35a486d2f9ca3ff334c5ef5f8ab059e4f6b8fa6319de757faaf9587eca4cad" exitCode=0 Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.220856 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" event={"ID":"a56b8ea0-ea80-4320-927c-14e52f803593","Type":"ContainerDied","Data":"8b35a486d2f9ca3ff334c5ef5f8ab059e4f6b8fa6319de757faaf9587eca4cad"} Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.220892 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" event={"ID":"a56b8ea0-ea80-4320-927c-14e52f803593","Type":"ContainerStarted","Data":"51449dbe615d367a74c7d0663604c130899b40be6456391f513a737ea73e6e2e"} Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.331052 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-g4flw" podStartSLOduration=4.331028577 podStartE2EDuration="4.331028577s" podCreationTimestamp="2025-12-02 14:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:02:21.272510956 +0000 UTC m=+1186.688324807" watchObservedRunningTime="2025-12-02 14:02:21.331028577 +0000 UTC m=+1186.746842428" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.390556 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.397251 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:21 crc kubenswrapper[4900]: E1202 14:02:21.401779 4900 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 14:02:21 crc kubenswrapper[4900]: E1202 14:02:21.401796 4900 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 14:02:21 crc kubenswrapper[4900]: E1202 14:02:21.401829 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift podName:305da939-8e7b-4fce-95f9-95d90218a1f0 nodeName:}" failed. No retries permitted until 2025-12-02 14:02:22.401817172 +0000 UTC m=+1187.817631023 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift") pod "swift-storage-0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0") : configmap "swift-ring-files" not found Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.498662 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nscr6\" (UniqueName: \"kubernetes.io/projected/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-kube-api-access-nscr6\") pod \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.499062 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-dns-svc\") pod \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.499110 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-config\") pod \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\" (UID: \"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5\") " Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.511327 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.556691 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-kube-api-access-nscr6" (OuterVolumeSpecName: "kube-api-access-nscr6") pod "c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" (UID: "c9e1c67d-fef0-47ac-8ead-d02c015fb6f5"). InnerVolumeSpecName "kube-api-access-nscr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.601488 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nscr6\" (UniqueName: \"kubernetes.io/projected/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-kube-api-access-nscr6\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.622260 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.636530 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-config" (OuterVolumeSpecName: "config") pod "c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" (UID: "c9e1c67d-fef0-47ac-8ead-d02c015fb6f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.637319 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" (UID: "c9e1c67d-fef0-47ac-8ead-d02c015fb6f5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:21 crc kubenswrapper[4900]: E1202 14:02:21.661688 4900 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 02 14:02:21 crc kubenswrapper[4900]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/b280cf8d-be09-4643-b9fd-e444c63a0440/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 02 14:02:21 crc kubenswrapper[4900]: > podSandboxID="8ae7910e7b91c545c62c9cb9a115382dd9cd21e4e16484aaa6a1fcaa76fbb403" Dec 02 14:02:21 crc kubenswrapper[4900]: E1202 14:02:21.661971 4900 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 02 14:02:21 crc kubenswrapper[4900]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h99h64ch5dbh6dh555h587h64bh5cfh647h5fdh57ch679h9h597h5f5hbch59bh54fh575h566h667h586h5f5h65ch5bch57h68h65ch58bh694h5cfq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv25m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8554648995-jsk7w_openstack(b280cf8d-be09-4643-b9fd-e444c63a0440): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/b280cf8d-be09-4643-b9fd-e444c63a0440/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 02 14:02:21 crc kubenswrapper[4900]: > logger="UnhandledError" Dec 02 14:02:21 crc kubenswrapper[4900]: E1202 14:02:21.663908 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/b280cf8d-be09-4643-b9fd-e444c63a0440/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-8554648995-jsk7w" podUID="b280cf8d-be09-4643-b9fd-e444c63a0440" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.702336 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-dns-svc\") pod \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.702465 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s767m\" (UniqueName: \"kubernetes.io/projected/1806230e-09b1-43bf-9ee0-5cdedb5f89be-kube-api-access-s767m\") pod \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.702587 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-config\") pod \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\" (UID: \"1806230e-09b1-43bf-9ee0-5cdedb5f89be\") " Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.702986 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.703003 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.707039 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1806230e-09b1-43bf-9ee0-5cdedb5f89be-kube-api-access-s767m" (OuterVolumeSpecName: "kube-api-access-s767m") pod "1806230e-09b1-43bf-9ee0-5cdedb5f89be" (UID: 
"1806230e-09b1-43bf-9ee0-5cdedb5f89be"). InnerVolumeSpecName "kube-api-access-s767m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.767847 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1806230e-09b1-43bf-9ee0-5cdedb5f89be" (UID: "1806230e-09b1-43bf-9ee0-5cdedb5f89be"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.774271 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-config" (OuterVolumeSpecName: "config") pod "1806230e-09b1-43bf-9ee0-5cdedb5f89be" (UID: "1806230e-09b1-43bf-9ee0-5cdedb5f89be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.805496 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-dns-svc\") pod \"c0749ee9-282b-49e6-8980-442f11752092\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.805589 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-ovsdbserver-sb\") pod \"c0749ee9-282b-49e6-8980-442f11752092\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.805698 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v69j\" (UniqueName: \"kubernetes.io/projected/c0749ee9-282b-49e6-8980-442f11752092-kube-api-access-4v69j\") pod \"c0749ee9-282b-49e6-8980-442f11752092\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.807344 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-config\") pod \"c0749ee9-282b-49e6-8980-442f11752092\" (UID: \"c0749ee9-282b-49e6-8980-442f11752092\") " Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.807948 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.807968 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s767m\" (UniqueName: \"kubernetes.io/projected/1806230e-09b1-43bf-9ee0-5cdedb5f89be-kube-api-access-s767m\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.807980 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1806230e-09b1-43bf-9ee0-5cdedb5f89be-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.808866 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0749ee9-282b-49e6-8980-442f11752092-kube-api-access-4v69j" (OuterVolumeSpecName: "kube-api-access-4v69j") pod "c0749ee9-282b-49e6-8980-442f11752092" (UID: "c0749ee9-282b-49e6-8980-442f11752092"). 
InnerVolumeSpecName "kube-api-access-4v69j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.837290 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c0749ee9-282b-49e6-8980-442f11752092" (UID: "c0749ee9-282b-49e6-8980-442f11752092"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.844076 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-config" (OuterVolumeSpecName: "config") pod "c0749ee9-282b-49e6-8980-442f11752092" (UID: "c0749ee9-282b-49e6-8980-442f11752092"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.854982 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0749ee9-282b-49e6-8980-442f11752092" (UID: "c0749ee9-282b-49e6-8980-442f11752092"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.909537 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.909569 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v69j\" (UniqueName: \"kubernetes.io/projected/c0749ee9-282b-49e6-8980-442f11752092-kube-api-access-4v69j\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.909583 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:21 crc kubenswrapper[4900]: I1202 14:02:21.909592 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0749ee9-282b-49e6-8980-442f11752092-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.234202 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" event={"ID":"a56b8ea0-ea80-4320-927c-14e52f803593","Type":"ContainerStarted","Data":"cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e"} Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.235202 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.240902 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.240902 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-l5rkb" event={"ID":"c9e1c67d-fef0-47ac-8ead-d02c015fb6f5","Type":"ContainerDied","Data":"68737e3bc3cbc4c07fbac2a94f97901ccfd52d36e56f522e4da2f3664b28249a"}
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.241172 4900 scope.go:117] "RemoveContainer" containerID="177ec5749bd45d9155e6400073f0debc2cf9ae8dc8af851ca004f54d4343b63f"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.254397 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-l2vng" event={"ID":"c0749ee9-282b-49e6-8980-442f11752092","Type":"ContainerDied","Data":"980a0bcb6784f76d7c80f32bd397289e062f5bacd47d0278491a7a5a64028882"}
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.254518 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-l2vng"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.263187 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" podStartSLOduration=3.263163298 podStartE2EDuration="3.263163298s" podCreationTimestamp="2025-12-02 14:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:02:22.262520579 +0000 UTC m=+1187.678334430" watchObservedRunningTime="2025-12-02 14:02:22.263163298 +0000 UTC m=+1187.678977159"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.269382 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.269822 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-lrc9k" event={"ID":"1806230e-09b1-43bf-9ee0-5cdedb5f89be","Type":"ContainerDied","Data":"0e696dec16deafd2f2340ecfa518edf301dcc0fe2b767bdff5bf31b8fe4a087d"}
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.319581 4900 scope.go:117] "RemoveContainer" containerID="ed4781a463175664a1c374769c42fa9c7fd0353b0a63752c89fc13485f7ccb51"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.362355 4900 scope.go:117] "RemoveContainer" containerID="4dd892c398d6bef54fbd9170d8d84245a89b0cf3173bfe30fa33d1c57ad7b7a8"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.383588 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2vng"]
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.391404 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-l2vng"]
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.399837 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5rkb"]
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.407708 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-l5rkb"]
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.409681 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lrc9k"]
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.409826 4900 scope.go:117] "RemoveContainer" containerID="420447dd5eb236c872f176ebf409ec1ef879dd58083f307df999d095444aca04"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.416784 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lrc9k"]
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.418513 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0"
Dec 02 14:02:22 crc kubenswrapper[4900]: E1202 14:02:22.419231 4900 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 02 14:02:22 crc kubenswrapper[4900]: E1202 14:02:22.419250 4900 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 02 14:02:22 crc kubenswrapper[4900]: E1202 14:02:22.419299 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift podName:305da939-8e7b-4fce-95f9-95d90218a1f0 nodeName:}" failed. No retries permitted until 2025-12-02 14:02:24.419281696 +0000 UTC m=+1189.835095547 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift") pod "swift-storage-0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0") : configmap "swift-ring-files" not found
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.478858 4900 scope.go:117] "RemoveContainer" containerID="e65c1fc1aa319f6e8d3c930ee0a40591f5e275474b6fcd8a912f52fc4d03633d"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.572418 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0080-account-create-update-7t6ck"]
Dec 02 14:02:22 crc kubenswrapper[4900]: E1202 14:02:22.572728 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1806230e-09b1-43bf-9ee0-5cdedb5f89be" containerName="dnsmasq-dns"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.572742 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1806230e-09b1-43bf-9ee0-5cdedb5f89be" containerName="dnsmasq-dns"
Dec 02 14:02:22 crc kubenswrapper[4900]: E1202 14:02:22.572762 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0749ee9-282b-49e6-8980-442f11752092" containerName="init"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.572768 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0749ee9-282b-49e6-8980-442f11752092" containerName="init"
Dec 02 14:02:22 crc kubenswrapper[4900]: E1202 14:02:22.572783 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" containerName="dnsmasq-dns"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.572789 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" containerName="dnsmasq-dns"
Dec 02 14:02:22 crc kubenswrapper[4900]: E1202 14:02:22.572797 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" containerName="init"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.572802 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" containerName="init"
Dec 02 14:02:22 crc kubenswrapper[4900]: E1202 14:02:22.572813 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1806230e-09b1-43bf-9ee0-5cdedb5f89be" containerName="init"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.572819 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1806230e-09b1-43bf-9ee0-5cdedb5f89be" containerName="init"
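The etc-swift records above show the kubelet's retry discipline for a failed volume mount: MountVolume.SetUp fails because the projected ConfigMap does not exist yet, and nestedpendingoperations schedules the next attempt with a doubled durationBeforeRetry (2s here, then 4s and 8s in the later records at 14:02:24 and 14:02:28). A minimal Go sketch of that cadence follows; the setUp probe and the three forced failures are illustrative stand-ins, not kubelet internals.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func main() {
        attempt := 0
        // setUp stands in for MountVolume.SetUp; it fails until the
        // ConfigMap "exists", mirroring the three failures in this log.
        setUp := func() error {
            if attempt++; attempt <= 3 {
                return errors.New(`configmap "swift-ring-files" not found`)
            }
            return nil
        }
        delay := 2 * time.Second // first durationBeforeRetry seen above
        for {
            if err := setUp(); err == nil {
                fmt.Println("MountVolume.SetUp succeeded")
                return
            }
            fmt.Printf("no retries permitted for %s (durationBeforeRetry %s)\n", delay, delay)
            time.Sleep(delay)
            delay *= 2 // 2s -> 4s -> 8s, matching the backoff in these records
        }
    }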
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.572973 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0749ee9-282b-49e6-8980-442f11752092" containerName="init"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.572985 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1806230e-09b1-43bf-9ee0-5cdedb5f89be" containerName="dnsmasq-dns"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.572998 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" containerName="dnsmasq-dns"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.573505 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0080-account-create-update-7t6ck"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.575550 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.604185 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0080-account-create-update-7t6ck"]
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.664334 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mg867"]
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.665371 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mg867"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.678122 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mg867"]
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.723917 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ede90d7-42d3-40d8-aebd-f8be400d967c-operator-scripts\") pod \"glance-0080-account-create-update-7t6ck\" (UID: \"3ede90d7-42d3-40d8-aebd-f8be400d967c\") " pod="openstack/glance-0080-account-create-update-7t6ck"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.723955 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch857\" (UniqueName: \"kubernetes.io/projected/3ede90d7-42d3-40d8-aebd-f8be400d967c-kube-api-access-ch857\") pod \"glance-0080-account-create-update-7t6ck\" (UID: \"3ede90d7-42d3-40d8-aebd-f8be400d967c\") " pod="openstack/glance-0080-account-create-update-7t6ck"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.825433 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgs94\" (UniqueName: \"kubernetes.io/projected/58a5b4c0-6620-4f5a-a17a-bc792435afac-kube-api-access-zgs94\") pod \"glance-db-create-mg867\" (UID: \"58a5b4c0-6620-4f5a-a17a-bc792435afac\") " pod="openstack/glance-db-create-mg867"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.825720 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ede90d7-42d3-40d8-aebd-f8be400d967c-operator-scripts\") pod \"glance-0080-account-create-update-7t6ck\" (UID: \"3ede90d7-42d3-40d8-aebd-f8be400d967c\") " pod="openstack/glance-0080-account-create-update-7t6ck"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.825761 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch857\" (UniqueName: \"kubernetes.io/projected/3ede90d7-42d3-40d8-aebd-f8be400d967c-kube-api-access-ch857\") pod \"glance-0080-account-create-update-7t6ck\" (UID: \"3ede90d7-42d3-40d8-aebd-f8be400d967c\") " pod="openstack/glance-0080-account-create-update-7t6ck"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.825821 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a5b4c0-6620-4f5a-a17a-bc792435afac-operator-scripts\") pod \"glance-db-create-mg867\" (UID: \"58a5b4c0-6620-4f5a-a17a-bc792435afac\") " pod="openstack/glance-db-create-mg867"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.827680 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ede90d7-42d3-40d8-aebd-f8be400d967c-operator-scripts\") pod \"glance-0080-account-create-update-7t6ck\" (UID: \"3ede90d7-42d3-40d8-aebd-f8be400d967c\") " pod="openstack/glance-0080-account-create-update-7t6ck"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.847846 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch857\" (UniqueName: \"kubernetes.io/projected/3ede90d7-42d3-40d8-aebd-f8be400d967c-kube-api-access-ch857\") pod \"glance-0080-account-create-update-7t6ck\" (UID: \"3ede90d7-42d3-40d8-aebd-f8be400d967c\") " pod="openstack/glance-0080-account-create-update-7t6ck"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.905455 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0080-account-create-update-7t6ck"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.927455 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1806230e-09b1-43bf-9ee0-5cdedb5f89be" path="/var/lib/kubelet/pods/1806230e-09b1-43bf-9ee0-5cdedb5f89be/volumes"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.928306 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0749ee9-282b-49e6-8980-442f11752092" path="/var/lib/kubelet/pods/c0749ee9-282b-49e6-8980-442f11752092/volumes"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.928818 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgs94\" (UniqueName: \"kubernetes.io/projected/58a5b4c0-6620-4f5a-a17a-bc792435afac-kube-api-access-zgs94\") pod \"glance-db-create-mg867\" (UID: \"58a5b4c0-6620-4f5a-a17a-bc792435afac\") " pod="openstack/glance-db-create-mg867"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.928966 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e1c67d-fef0-47ac-8ead-d02c015fb6f5" path="/var/lib/kubelet/pods/c9e1c67d-fef0-47ac-8ead-d02c015fb6f5/volumes"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.928985 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a5b4c0-6620-4f5a-a17a-bc792435afac-operator-scripts\") pod \"glance-db-create-mg867\" (UID: \"58a5b4c0-6620-4f5a-a17a-bc792435afac\") " pod="openstack/glance-db-create-mg867"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.931762 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a5b4c0-6620-4f5a-a17a-bc792435afac-operator-scripts\") pod \"glance-db-create-mg867\" (UID: \"58a5b4c0-6620-4f5a-a17a-bc792435afac\") " pod="openstack/glance-db-create-mg867"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.952694 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgs94\" (UniqueName: \"kubernetes.io/projected/58a5b4c0-6620-4f5a-a17a-bc792435afac-kube-api-access-zgs94\") pod \"glance-db-create-mg867\" (UID: \"58a5b4c0-6620-4f5a-a17a-bc792435afac\") " pod="openstack/glance-db-create-mg867"
Dec 02 14:02:22 crc kubenswrapper[4900]: I1202 14:02:22.983863 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mg867"
Dec 02 14:02:23 crc kubenswrapper[4900]: I1202 14:02:23.283432 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jsk7w" event={"ID":"b280cf8d-be09-4643-b9fd-e444c63a0440","Type":"ContainerStarted","Data":"168658c3fbd41c7f4d7621ce8f6505e187b6bc6f7fe596a226eb3f34aa4afd32"}
Dec 02 14:02:23 crc kubenswrapper[4900]: I1202 14:02:23.284623 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-jsk7w"
Dec 02 14:02:23 crc kubenswrapper[4900]: I1202 14:02:23.289897 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9c17cf84-2174-42d8-880a-9a643a161ef4","Type":"ContainerStarted","Data":"21b9a43c02558258bc5549999999ff72f00f4644cbc3a254387cc0fa7154a8d5"}
Dec 02 14:02:23 crc kubenswrapper[4900]: I1202 14:02:23.289932 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9c17cf84-2174-42d8-880a-9a643a161ef4","Type":"ContainerStarted","Data":"9b7c3327a1cb841f7805b58f727c06e1d6143291f5866de8942d0948d6568573"}
Dec 02 14:02:23 crc kubenswrapper[4900]: I1202 14:02:23.290339 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Dec 02 14:02:23 crc kubenswrapper[4900]: I1202 14:02:23.314727 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-jsk7w" podStartSLOduration=6.314705218 podStartE2EDuration="6.314705218s" podCreationTimestamp="2025-12-02 14:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:02:23.304565364 +0000 UTC m=+1188.720379225" watchObservedRunningTime="2025-12-02 14:02:23.314705218 +0000 UTC m=+1188.730519089"
Dec 02 14:02:23 crc kubenswrapper[4900]: I1202 14:02:23.342620 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.284958174 podStartE2EDuration="4.34259976s" podCreationTimestamp="2025-12-02 14:02:19 +0000 UTC" firstStartedPulling="2025-12-02 14:02:20.114032366 +0000 UTC m=+1185.529846217" lastFinishedPulling="2025-12-02 14:02:22.171673942 +0000 UTC m=+1187.587487803" observedRunningTime="2025-12-02 14:02:23.337200239 +0000 UTC m=+1188.753014090" watchObservedRunningTime="2025-12-02 14:02:23.34259976 +0000 UTC m=+1188.758413611"
Dec 02 14:02:23 crc kubenswrapper[4900]: I1202 14:02:23.381059 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0080-account-create-update-7t6ck"]
Dec 02 14:02:23 crc kubenswrapper[4900]: W1202 14:02:23.384011 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ede90d7_42d3_40d8_aebd_f8be400d967c.slice/crio-27cab59b3258089867d1ba0f57c57fe4c14f32cae34026955bb916c02495c362 WatchSource:0}: Error finding container 27cab59b3258089867d1ba0f57c57fe4c14f32cae34026955bb916c02495c362: Status 404 returned error can't find the container with id 27cab59b3258089867d1ba0f57c57fe4c14f32cae34026955bb916c02495c362
Dec 02 14:02:23 crc kubenswrapper[4900]: I1202 14:02:23.502103 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mg867"]
Dec 02 14:02:23 crc kubenswrapper[4900]: W1202 14:02:23.517937 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58a5b4c0_6620_4f5a_a17a_bc792435afac.slice/crio-d107114962380e99fe0a886950e85227581cf8ae6f30474cacf41a27a81a1ade WatchSource:0}: Error finding container d107114962380e99fe0a886950e85227581cf8ae6f30474cacf41a27a81a1ade: Status 404 returned error can't find the container with id d107114962380e99fe0a886950e85227581cf8ae6f30474cacf41a27a81a1ade
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.300227 4900 generic.go:334] "Generic (PLEG): container finished" podID="58a5b4c0-6620-4f5a-a17a-bc792435afac" containerID="cbb6d068dcd832de001c1d6ffc2cef4ef552752c3ab552a513c3bee122eaee7c" exitCode=0
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.300351 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mg867" event={"ID":"58a5b4c0-6620-4f5a-a17a-bc792435afac","Type":"ContainerDied","Data":"cbb6d068dcd832de001c1d6ffc2cef4ef552752c3ab552a513c3bee122eaee7c"}
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.300724 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mg867" event={"ID":"58a5b4c0-6620-4f5a-a17a-bc792435afac","Type":"ContainerStarted","Data":"d107114962380e99fe0a886950e85227581cf8ae6f30474cacf41a27a81a1ade"}
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.302777 4900 generic.go:334] "Generic (PLEG): container finished" podID="3ede90d7-42d3-40d8-aebd-f8be400d967c" containerID="0c0293081205009aa4d4b75d64d6c80b5931991b50a6c6d78594b61086ccc082" exitCode=0
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.302922 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0080-account-create-update-7t6ck" event={"ID":"3ede90d7-42d3-40d8-aebd-f8be400d967c","Type":"ContainerDied","Data":"0c0293081205009aa4d4b75d64d6c80b5931991b50a6c6d78594b61086ccc082"}
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.302986 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0080-account-create-update-7t6ck" event={"ID":"3ede90d7-42d3-40d8-aebd-f8be400d967c","Type":"ContainerStarted","Data":"27cab59b3258089867d1ba0f57c57fe4c14f32cae34026955bb916c02495c362"}
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.461942 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0"
Dec 02 14:02:24 crc kubenswrapper[4900]: E1202 14:02:24.462950 4900 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 02 14:02:24 crc kubenswrapper[4900]: E1202 14:02:24.462997 4900 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 02 14:02:24 crc kubenswrapper[4900]: E1202 14:02:24.463139 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift podName:305da939-8e7b-4fce-95f9-95d90218a1f0 nodeName:}" failed. No retries permitted until 2025-12-02 14:02:28.463119636 +0000 UTC m=+1193.878933497 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift") pod "swift-storage-0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0") : configmap "swift-ring-files" not found
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.799693 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wt5sd"]
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.801470 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.804075 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.804714 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.805715 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.814602 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wt5sd"]
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.974374 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ea1df1b2-175e-4695-a514-0378d69d38f9-etc-swift\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.974439 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-dispersionconf\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.974469 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-ring-data-devices\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.974521 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-swiftconf\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.974540 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-scripts\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.974568 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-combined-ca-bundle\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:24 crc kubenswrapper[4900]: I1202 14:02:24.974676 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9chk\" (UniqueName: \"kubernetes.io/projected/ea1df1b2-175e-4695-a514-0378d69d38f9-kube-api-access-c9chk\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.076915 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ea1df1b2-175e-4695-a514-0378d69d38f9-etc-swift\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.077059 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-dispersionconf\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.077128 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-ring-data-devices\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.077190 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-swiftconf\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.077226 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-scripts\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.077278 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-combined-ca-bundle\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.077358 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9chk\" (UniqueName: \"kubernetes.io/projected/ea1df1b2-175e-4695-a514-0378d69d38f9-kube-api-access-c9chk\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.079837 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ea1df1b2-175e-4695-a514-0378d69d38f9-etc-swift\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.081535 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-ring-data-devices\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.082035 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-scripts\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.087560 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-combined-ca-bundle\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.089512 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-swiftconf\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.108402 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-dispersionconf\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.111190 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9chk\" (UniqueName: \"kubernetes.io/projected/ea1df1b2-175e-4695-a514-0378d69d38f9-kube-api-access-c9chk\") pod \"swift-ring-rebalance-wt5sd\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " pod="openstack/swift-ring-rebalance-wt5sd"
Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.126595 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wt5sd"
Need to start a new one" pod="openstack/swift-ring-rebalance-wt5sd" Dec 02 14:02:25 crc kubenswrapper[4900]: W1202 14:02:25.659512 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea1df1b2_175e_4695_a514_0378d69d38f9.slice/crio-1183477ee386fbdde99ba893b30ff034cfe3619272d0b4f834d4cae14c08a03b WatchSource:0}: Error finding container 1183477ee386fbdde99ba893b30ff034cfe3619272d0b4f834d4cae14c08a03b: Status 404 returned error can't find the container with id 1183477ee386fbdde99ba893b30ff034cfe3619272d0b4f834d4cae14c08a03b Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.670494 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wt5sd"] Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.715556 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0080-account-create-update-7t6ck" Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.720767 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mg867" Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.889445 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch857\" (UniqueName: \"kubernetes.io/projected/3ede90d7-42d3-40d8-aebd-f8be400d967c-kube-api-access-ch857\") pod \"3ede90d7-42d3-40d8-aebd-f8be400d967c\" (UID: \"3ede90d7-42d3-40d8-aebd-f8be400d967c\") " Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.889825 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ede90d7-42d3-40d8-aebd-f8be400d967c-operator-scripts\") pod \"3ede90d7-42d3-40d8-aebd-f8be400d967c\" (UID: \"3ede90d7-42d3-40d8-aebd-f8be400d967c\") " Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.890017 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgs94\" (UniqueName: \"kubernetes.io/projected/58a5b4c0-6620-4f5a-a17a-bc792435afac-kube-api-access-zgs94\") pod \"58a5b4c0-6620-4f5a-a17a-bc792435afac\" (UID: \"58a5b4c0-6620-4f5a-a17a-bc792435afac\") " Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.890186 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a5b4c0-6620-4f5a-a17a-bc792435afac-operator-scripts\") pod \"58a5b4c0-6620-4f5a-a17a-bc792435afac\" (UID: \"58a5b4c0-6620-4f5a-a17a-bc792435afac\") " Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.890201 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ede90d7-42d3-40d8-aebd-f8be400d967c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ede90d7-42d3-40d8-aebd-f8be400d967c" (UID: "3ede90d7-42d3-40d8-aebd-f8be400d967c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.890514 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58a5b4c0-6620-4f5a-a17a-bc792435afac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58a5b4c0-6620-4f5a-a17a-bc792435afac" (UID: "58a5b4c0-6620-4f5a-a17a-bc792435afac"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.891005 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a5b4c0-6620-4f5a-a17a-bc792435afac-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.891098 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ede90d7-42d3-40d8-aebd-f8be400d967c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.895910 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a5b4c0-6620-4f5a-a17a-bc792435afac-kube-api-access-zgs94" (OuterVolumeSpecName: "kube-api-access-zgs94") pod "58a5b4c0-6620-4f5a-a17a-bc792435afac" (UID: "58a5b4c0-6620-4f5a-a17a-bc792435afac"). InnerVolumeSpecName "kube-api-access-zgs94". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.896466 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ede90d7-42d3-40d8-aebd-f8be400d967c-kube-api-access-ch857" (OuterVolumeSpecName: "kube-api-access-ch857") pod "3ede90d7-42d3-40d8-aebd-f8be400d967c" (UID: "3ede90d7-42d3-40d8-aebd-f8be400d967c"). InnerVolumeSpecName "kube-api-access-ch857". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.992924 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgs94\" (UniqueName: \"kubernetes.io/projected/58a5b4c0-6620-4f5a-a17a-bc792435afac-kube-api-access-zgs94\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:25 crc kubenswrapper[4900]: I1202 14:02:25.992990 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch857\" (UniqueName: \"kubernetes.io/projected/3ede90d7-42d3-40d8-aebd-f8be400d967c-kube-api-access-ch857\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.323586 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wt5sd" event={"ID":"ea1df1b2-175e-4695-a514-0378d69d38f9","Type":"ContainerStarted","Data":"1183477ee386fbdde99ba893b30ff034cfe3619272d0b4f834d4cae14c08a03b"} Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.325407 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0080-account-create-update-7t6ck" event={"ID":"3ede90d7-42d3-40d8-aebd-f8be400d967c","Type":"ContainerDied","Data":"27cab59b3258089867d1ba0f57c57fe4c14f32cae34026955bb916c02495c362"} Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.325446 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27cab59b3258089867d1ba0f57c57fe4c14f32cae34026955bb916c02495c362" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.325517 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0080-account-create-update-7t6ck" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.338132 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mg867" event={"ID":"58a5b4c0-6620-4f5a-a17a-bc792435afac","Type":"ContainerDied","Data":"d107114962380e99fe0a886950e85227581cf8ae6f30474cacf41a27a81a1ade"} Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.338162 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d107114962380e99fe0a886950e85227581cf8ae6f30474cacf41a27a81a1ade" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.338211 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mg867" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.909154 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ngxzq"] Dec 02 14:02:26 crc kubenswrapper[4900]: E1202 14:02:26.910550 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ede90d7-42d3-40d8-aebd-f8be400d967c" containerName="mariadb-account-create-update" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.910636 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ede90d7-42d3-40d8-aebd-f8be400d967c" containerName="mariadb-account-create-update" Dec 02 14:02:26 crc kubenswrapper[4900]: E1202 14:02:26.910752 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a5b4c0-6620-4f5a-a17a-bc792435afac" containerName="mariadb-database-create" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.910805 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a5b4c0-6620-4f5a-a17a-bc792435afac" containerName="mariadb-database-create" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.911078 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a5b4c0-6620-4f5a-a17a-bc792435afac" containerName="mariadb-database-create" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.911145 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ede90d7-42d3-40d8-aebd-f8be400d967c" containerName="mariadb-account-create-update" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.911951 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ngxzq" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.933961 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ngxzq"] Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.983490 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f5f9-account-create-update-29pcz"] Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.984433 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f5f9-account-create-update-29pcz" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.986200 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 02 14:02:26 crc kubenswrapper[4900]: I1202 14:02:26.997372 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f5f9-account-create-update-29pcz"] Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.010916 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3c7373-c9f5-4957-abaa-e2719e654d2b-operator-scripts\") pod \"keystone-db-create-ngxzq\" (UID: \"fa3c7373-c9f5-4957-abaa-e2719e654d2b\") " pod="openstack/keystone-db-create-ngxzq" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.011020 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjcfn\" (UniqueName: \"kubernetes.io/projected/fa3c7373-c9f5-4957-abaa-e2719e654d2b-kube-api-access-hjcfn\") pod \"keystone-db-create-ngxzq\" (UID: \"fa3c7373-c9f5-4957-abaa-e2719e654d2b\") " pod="openstack/keystone-db-create-ngxzq" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.112516 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhwb5\" (UniqueName: \"kubernetes.io/projected/e2e7b745-29c6-452c-b9ff-392b476fddd1-kube-api-access-vhwb5\") pod \"keystone-f5f9-account-create-update-29pcz\" (UID: \"e2e7b745-29c6-452c-b9ff-392b476fddd1\") " pod="openstack/keystone-f5f9-account-create-update-29pcz" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.112690 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3c7373-c9f5-4957-abaa-e2719e654d2b-operator-scripts\") pod \"keystone-db-create-ngxzq\" (UID: \"fa3c7373-c9f5-4957-abaa-e2719e654d2b\") " pod="openstack/keystone-db-create-ngxzq" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.112746 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2e7b745-29c6-452c-b9ff-392b476fddd1-operator-scripts\") pod \"keystone-f5f9-account-create-update-29pcz\" (UID: \"e2e7b745-29c6-452c-b9ff-392b476fddd1\") " pod="openstack/keystone-f5f9-account-create-update-29pcz" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.112808 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjcfn\" (UniqueName: \"kubernetes.io/projected/fa3c7373-c9f5-4957-abaa-e2719e654d2b-kube-api-access-hjcfn\") pod \"keystone-db-create-ngxzq\" (UID: \"fa3c7373-c9f5-4957-abaa-e2719e654d2b\") " pod="openstack/keystone-db-create-ngxzq" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.113782 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3c7373-c9f5-4957-abaa-e2719e654d2b-operator-scripts\") pod \"keystone-db-create-ngxzq\" (UID: \"fa3c7373-c9f5-4957-abaa-e2719e654d2b\") " pod="openstack/keystone-db-create-ngxzq" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.138297 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjcfn\" (UniqueName: \"kubernetes.io/projected/fa3c7373-c9f5-4957-abaa-e2719e654d2b-kube-api-access-hjcfn\") pod 
\"keystone-db-create-ngxzq\" (UID: \"fa3c7373-c9f5-4957-abaa-e2719e654d2b\") " pod="openstack/keystone-db-create-ngxzq" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.190376 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-lwsqf"] Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.192115 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lwsqf" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.200992 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lwsqf"] Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.214518 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2e7b745-29c6-452c-b9ff-392b476fddd1-operator-scripts\") pod \"keystone-f5f9-account-create-update-29pcz\" (UID: \"e2e7b745-29c6-452c-b9ff-392b476fddd1\") " pod="openstack/keystone-f5f9-account-create-update-29pcz" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.214625 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhwb5\" (UniqueName: \"kubernetes.io/projected/e2e7b745-29c6-452c-b9ff-392b476fddd1-kube-api-access-vhwb5\") pod \"keystone-f5f9-account-create-update-29pcz\" (UID: \"e2e7b745-29c6-452c-b9ff-392b476fddd1\") " pod="openstack/keystone-f5f9-account-create-update-29pcz" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.215770 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2e7b745-29c6-452c-b9ff-392b476fddd1-operator-scripts\") pod \"keystone-f5f9-account-create-update-29pcz\" (UID: \"e2e7b745-29c6-452c-b9ff-392b476fddd1\") " pod="openstack/keystone-f5f9-account-create-update-29pcz" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.230116 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhwb5\" (UniqueName: \"kubernetes.io/projected/e2e7b745-29c6-452c-b9ff-392b476fddd1-kube-api-access-vhwb5\") pod \"keystone-f5f9-account-create-update-29pcz\" (UID: \"e2e7b745-29c6-452c-b9ff-392b476fddd1\") " pod="openstack/keystone-f5f9-account-create-update-29pcz" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.238786 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ngxzq" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.292892 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a999-account-create-update-h878s"] Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.293890 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a999-account-create-update-h878s" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.296280 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.302186 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f5f9-account-create-update-29pcz" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.315966 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9989e23c-35ec-4efa-9660-0ad9574db896-operator-scripts\") pod \"placement-db-create-lwsqf\" (UID: \"9989e23c-35ec-4efa-9660-0ad9574db896\") " pod="openstack/placement-db-create-lwsqf" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.316116 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lhw7\" (UniqueName: \"kubernetes.io/projected/9989e23c-35ec-4efa-9660-0ad9574db896-kube-api-access-6lhw7\") pod \"placement-db-create-lwsqf\" (UID: \"9989e23c-35ec-4efa-9660-0ad9574db896\") " pod="openstack/placement-db-create-lwsqf" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.321811 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a999-account-create-update-h878s"] Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.417540 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7hk5\" (UniqueName: \"kubernetes.io/projected/69fae44f-0fc7-41e6-9e73-316ac2e88e40-kube-api-access-w7hk5\") pod \"placement-a999-account-create-update-h878s\" (UID: \"69fae44f-0fc7-41e6-9e73-316ac2e88e40\") " pod="openstack/placement-a999-account-create-update-h878s" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.417615 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lhw7\" (UniqueName: \"kubernetes.io/projected/9989e23c-35ec-4efa-9660-0ad9574db896-kube-api-access-6lhw7\") pod \"placement-db-create-lwsqf\" (UID: \"9989e23c-35ec-4efa-9660-0ad9574db896\") " pod="openstack/placement-db-create-lwsqf" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.417701 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69fae44f-0fc7-41e6-9e73-316ac2e88e40-operator-scripts\") pod \"placement-a999-account-create-update-h878s\" (UID: \"69fae44f-0fc7-41e6-9e73-316ac2e88e40\") " pod="openstack/placement-a999-account-create-update-h878s" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.417743 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9989e23c-35ec-4efa-9660-0ad9574db896-operator-scripts\") pod \"placement-db-create-lwsqf\" (UID: \"9989e23c-35ec-4efa-9660-0ad9574db896\") " pod="openstack/placement-db-create-lwsqf" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.418451 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9989e23c-35ec-4efa-9660-0ad9574db896-operator-scripts\") pod \"placement-db-create-lwsqf\" (UID: \"9989e23c-35ec-4efa-9660-0ad9574db896\") " pod="openstack/placement-db-create-lwsqf" Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.439715 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lhw7\" (UniqueName: \"kubernetes.io/projected/9989e23c-35ec-4efa-9660-0ad9574db896-kube-api-access-6lhw7\") pod \"placement-db-create-lwsqf\" (UID: \"9989e23c-35ec-4efa-9660-0ad9574db896\") " pod="openstack/placement-db-create-lwsqf" Dec 02 14:02:27 crc 
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.519828 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7hk5\" (UniqueName: \"kubernetes.io/projected/69fae44f-0fc7-41e6-9e73-316ac2e88e40-kube-api-access-w7hk5\") pod \"placement-a999-account-create-update-h878s\" (UID: \"69fae44f-0fc7-41e6-9e73-316ac2e88e40\") " pod="openstack/placement-a999-account-create-update-h878s"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.520575 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69fae44f-0fc7-41e6-9e73-316ac2e88e40-operator-scripts\") pod \"placement-a999-account-create-update-h878s\" (UID: \"69fae44f-0fc7-41e6-9e73-316ac2e88e40\") " pod="openstack/placement-a999-account-create-update-h878s"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.521327 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69fae44f-0fc7-41e6-9e73-316ac2e88e40-operator-scripts\") pod \"placement-a999-account-create-update-h878s\" (UID: \"69fae44f-0fc7-41e6-9e73-316ac2e88e40\") " pod="openstack/placement-a999-account-create-update-h878s"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.528980 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lwsqf"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.536565 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7hk5\" (UniqueName: \"kubernetes.io/projected/69fae44f-0fc7-41e6-9e73-316ac2e88e40-kube-api-access-w7hk5\") pod \"placement-a999-account-create-update-h878s\" (UID: \"69fae44f-0fc7-41e6-9e73-316ac2e88e40\") " pod="openstack/placement-a999-account-create-update-h878s"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.619849 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a999-account-create-update-h878s"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.814302 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wrtvb"]
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.815490 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.819217 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7hphq"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.819519 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.833688 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wrtvb"]
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.927418 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-combined-ca-bundle\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.927569 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd67x\" (UniqueName: \"kubernetes.io/projected/51675b3c-124f-44aa-b629-c771287652ef-kube-api-access-zd67x\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.927623 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-config-data\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:27 crc kubenswrapper[4900]: I1202 14:02:27.927676 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-db-sync-config-data\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:28 crc kubenswrapper[4900]: I1202 14:02:28.030696 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd67x\" (UniqueName: \"kubernetes.io/projected/51675b3c-124f-44aa-b629-c771287652ef-kube-api-access-zd67x\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:28 crc kubenswrapper[4900]: I1202 14:02:28.030775 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-config-data\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:28 crc kubenswrapper[4900]: I1202 14:02:28.030827 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-db-sync-config-data\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:28 crc kubenswrapper[4900]: I1202 14:02:28.030872 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-combined-ca-bundle\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:28 crc kubenswrapper[4900]: I1202 14:02:28.035594 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-config-data\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:28 crc kubenswrapper[4900]: I1202 14:02:28.037318 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-combined-ca-bundle\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:28 crc kubenswrapper[4900]: I1202 14:02:28.043284 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-db-sync-config-data\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:28 crc kubenswrapper[4900]: I1202 14:02:28.050047 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd67x\" (UniqueName: \"kubernetes.io/projected/51675b3c-124f-44aa-b629-c771287652ef-kube-api-access-zd67x\") pod \"glance-db-sync-wrtvb\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:28 crc kubenswrapper[4900]: I1202 14:02:28.156525 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wrtvb"
Dec 02 14:02:28 crc kubenswrapper[4900]: I1202 14:02:28.286028 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-jsk7w"
Dec 02 14:02:28 crc kubenswrapper[4900]: I1202 14:02:28.540792 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0"
Dec 02 14:02:28 crc kubenswrapper[4900]: E1202 14:02:28.541265 4900 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 02 14:02:28 crc kubenswrapper[4900]: E1202 14:02:28.541289 4900 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 02 14:02:28 crc kubenswrapper[4900]: E1202 14:02:28.541353 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift podName:305da939-8e7b-4fce-95f9-95d90218a1f0 nodeName:}" failed. No retries permitted until 2025-12-02 14:02:36.541328428 +0000 UTC m=+1201.957142289 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift") pod "swift-storage-0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0") : configmap "swift-ring-files" not found
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift") pod "swift-storage-0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0") : configmap "swift-ring-files" not found Dec 02 14:02:29 crc kubenswrapper[4900]: I1202 14:02:29.958237 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.011318 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jsk7w"] Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.011525 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-jsk7w" podUID="b280cf8d-be09-4643-b9fd-e444c63a0440" containerName="dnsmasq-dns" containerID="cri-o://168658c3fbd41c7f4d7621ce8f6505e187b6bc6f7fe596a226eb3f34aa4afd32" gracePeriod=10 Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.114228 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lwsqf"] Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.215229 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wrtvb"] Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.238467 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ngxzq"] Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.244515 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f5f9-account-create-update-29pcz"] Dec 02 14:02:30 crc kubenswrapper[4900]: W1202 14:02:30.245840 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69fae44f_0fc7_41e6_9e73_316ac2e88e40.slice/crio-410b30226ed736ace3a0750e4dcd445ce68e1733d40b1b4a82156277c183ad9f WatchSource:0}: Error finding container 410b30226ed736ace3a0750e4dcd445ce68e1733d40b1b4a82156277c183ad9f: Status 404 returned error can't find the container with id 410b30226ed736ace3a0750e4dcd445ce68e1733d40b1b4a82156277c183ad9f Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.249835 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a999-account-create-update-h878s"] Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.370120 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lwsqf" event={"ID":"9989e23c-35ec-4efa-9660-0ad9574db896","Type":"ContainerStarted","Data":"c630f99583cbed16ffab5fc588ae2a735afc6b676c53185c229cd038d453fe1b"} Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.370157 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lwsqf" event={"ID":"9989e23c-35ec-4efa-9660-0ad9574db896","Type":"ContainerStarted","Data":"13095b78ea81dcaccae0f3d932bd0064de774e7551b8c0a198b0b9784a5b2fd3"} Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.376729 4900 generic.go:334] "Generic (PLEG): container finished" podID="b280cf8d-be09-4643-b9fd-e444c63a0440" containerID="168658c3fbd41c7f4d7621ce8f6505e187b6bc6f7fe596a226eb3f34aa4afd32" exitCode=0 Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.376788 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jsk7w" event={"ID":"b280cf8d-be09-4643-b9fd-e444c63a0440","Type":"ContainerDied","Data":"168658c3fbd41c7f4d7621ce8f6505e187b6bc6f7fe596a226eb3f34aa4afd32"} Dec 02 14:02:30 crc 
kubenswrapper[4900]: I1202 14:02:30.385542 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-lwsqf" podStartSLOduration=3.385524298 podStartE2EDuration="3.385524298s" podCreationTimestamp="2025-12-02 14:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:02:30.384937281 +0000 UTC m=+1195.800751132" watchObservedRunningTime="2025-12-02 14:02:30.385524298 +0000 UTC m=+1195.801338149" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.388861 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5f9-account-create-update-29pcz" event={"ID":"e2e7b745-29c6-452c-b9ff-392b476fddd1","Type":"ContainerStarted","Data":"db04cf11c61e39c667aea9e2962187cfa1ecd3b83d2d73530cd714fdfdc16b98"} Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.394531 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wrtvb" event={"ID":"51675b3c-124f-44aa-b629-c771287652ef","Type":"ContainerStarted","Data":"2cf38d58c37d457fa20081dfd3426c02b07fcbf5f25094d4c9a96d132566a906"} Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.397395 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wt5sd" event={"ID":"ea1df1b2-175e-4695-a514-0378d69d38f9","Type":"ContainerStarted","Data":"c529d208dadf5ed6b32c89601f28c3dcd038273fd7d9fdc091fb6f954fe330e7"} Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.400094 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ngxzq" event={"ID":"fa3c7373-c9f5-4957-abaa-e2719e654d2b","Type":"ContainerStarted","Data":"2c8238a7abebfac964b3297e51f754b1647558dfa3605daec26d5c38fffb3c6c"} Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.403503 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a999-account-create-update-h878s" event={"ID":"69fae44f-0fc7-41e6-9e73-316ac2e88e40","Type":"ContainerStarted","Data":"410b30226ed736ace3a0750e4dcd445ce68e1733d40b1b4a82156277c183ad9f"} Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.432852 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.460858 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wt5sd" podStartSLOduration=2.477158078 podStartE2EDuration="6.46083948s" podCreationTimestamp="2025-12-02 14:02:24 +0000 UTC" firstStartedPulling="2025-12-02 14:02:25.663310484 +0000 UTC m=+1191.079124335" lastFinishedPulling="2025-12-02 14:02:29.646991876 +0000 UTC m=+1195.062805737" observedRunningTime="2025-12-02 14:02:30.42303827 +0000 UTC m=+1195.838852121" watchObservedRunningTime="2025-12-02 14:02:30.46083948 +0000 UTC m=+1195.876653341" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.575744 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-nb\") pod \"b280cf8d-be09-4643-b9fd-e444c63a0440\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.575838 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-config\") pod \"b280cf8d-be09-4643-b9fd-e444c63a0440\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.575958 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-dns-svc\") pod \"b280cf8d-be09-4643-b9fd-e444c63a0440\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.575990 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-sb\") pod \"b280cf8d-be09-4643-b9fd-e444c63a0440\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.576021 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv25m\" (UniqueName: \"kubernetes.io/projected/b280cf8d-be09-4643-b9fd-e444c63a0440-kube-api-access-sv25m\") pod \"b280cf8d-be09-4643-b9fd-e444c63a0440\" (UID: \"b280cf8d-be09-4643-b9fd-e444c63a0440\") " Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.585841 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b280cf8d-be09-4643-b9fd-e444c63a0440-kube-api-access-sv25m" (OuterVolumeSpecName: "kube-api-access-sv25m") pod "b280cf8d-be09-4643-b9fd-e444c63a0440" (UID: "b280cf8d-be09-4643-b9fd-e444c63a0440"). InnerVolumeSpecName "kube-api-access-sv25m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.618434 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b280cf8d-be09-4643-b9fd-e444c63a0440" (UID: "b280cf8d-be09-4643-b9fd-e444c63a0440"). InnerVolumeSpecName "dns-svc". 
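
The two startup-latency entries above are internally consistent: for swift-ring-rebalance-wt5sd, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short check of the arithmetic with the timestamps copied from the log (agreement is within the ~10ns rounding of the monotonic-clock offsets):

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-02 14:02:24 +0000 UTC")
	running := parse("2025-12-02 14:02:30.46083948 +0000 UTC")
	pullStart := parse("2025-12-02 14:02:25.663310484 +0000 UTC")
	pullEnd := parse("2025-12-02 14:02:29.646991876 +0000 UTC")

	e2e := running.Sub(created)         // 6.46083948s, as logged
	slo := e2e - pullEnd.Sub(pullStart) // ~2.477158s (log: 2.477158078s)
	fmt.Println(e2e, slo)
}
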
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.619288 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b280cf8d-be09-4643-b9fd-e444c63a0440" (UID: "b280cf8d-be09-4643-b9fd-e444c63a0440"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.619942 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-config" (OuterVolumeSpecName: "config") pod "b280cf8d-be09-4643-b9fd-e444c63a0440" (UID: "b280cf8d-be09-4643-b9fd-e444c63a0440"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.627571 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b280cf8d-be09-4643-b9fd-e444c63a0440" (UID: "b280cf8d-be09-4643-b9fd-e444c63a0440"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.678137 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.678193 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.678213 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.678235 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv25m\" (UniqueName: \"kubernetes.io/projected/b280cf8d-be09-4643-b9fd-e444c63a0440-kube-api-access-sv25m\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:30 crc kubenswrapper[4900]: I1202 14:02:30.678252 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b280cf8d-be09-4643-b9fd-e444c63a0440-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.426685 4900 generic.go:334] "Generic (PLEG): container finished" podID="69fae44f-0fc7-41e6-9e73-316ac2e88e40" containerID="f9bf858d09a52dc8dcf968ca816270985911f7bf9e4f15a8be6978b0c25c46b5" exitCode=0 Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.426869 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a999-account-create-update-h878s" event={"ID":"69fae44f-0fc7-41e6-9e73-316ac2e88e40","Type":"ContainerDied","Data":"f9bf858d09a52dc8dcf968ca816270985911f7bf9e4f15a8be6978b0c25c46b5"} Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.431079 4900 generic.go:334] "Generic (PLEG): container finished" podID="9989e23c-35ec-4efa-9660-0ad9574db896" containerID="c630f99583cbed16ffab5fc588ae2a735afc6b676c53185c229cd038d453fe1b" exitCode=0 Dec 02 14:02:31 crc 
kubenswrapper[4900]: I1202 14:02:31.431142 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lwsqf" event={"ID":"9989e23c-35ec-4efa-9660-0ad9574db896","Type":"ContainerDied","Data":"c630f99583cbed16ffab5fc588ae2a735afc6b676c53185c229cd038d453fe1b"} Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.434138 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jsk7w" event={"ID":"b280cf8d-be09-4643-b9fd-e444c63a0440","Type":"ContainerDied","Data":"8ae7910e7b91c545c62c9cb9a115382dd9cd21e4e16484aaa6a1fcaa76fbb403"} Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.434200 4900 scope.go:117] "RemoveContainer" containerID="168658c3fbd41c7f4d7621ce8f6505e187b6bc6f7fe596a226eb3f34aa4afd32" Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.434360 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jsk7w" Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.447327 4900 generic.go:334] "Generic (PLEG): container finished" podID="e2e7b745-29c6-452c-b9ff-392b476fddd1" containerID="802a012ebaf25a3865974158aa1e628674a0cf876aad0fc0d5862b083675bce5" exitCode=0 Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.447381 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5f9-account-create-update-29pcz" event={"ID":"e2e7b745-29c6-452c-b9ff-392b476fddd1","Type":"ContainerDied","Data":"802a012ebaf25a3865974158aa1e628674a0cf876aad0fc0d5862b083675bce5"} Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.449733 4900 generic.go:334] "Generic (PLEG): container finished" podID="fa3c7373-c9f5-4957-abaa-e2719e654d2b" containerID="e4169f4de5b556c9ce6324a489231fb4175368c98080d2ae68fb1b574e2bdaf0" exitCode=0 Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.449760 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ngxzq" event={"ID":"fa3c7373-c9f5-4957-abaa-e2719e654d2b","Type":"ContainerDied","Data":"e4169f4de5b556c9ce6324a489231fb4175368c98080d2ae68fb1b574e2bdaf0"} Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.469738 4900 scope.go:117] "RemoveContainer" containerID="42cec345ec4ad876d44e4fff9e709d51ebfb2245f09239348d617a61574255bf" Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.505991 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jsk7w"] Dec 02 14:02:31 crc kubenswrapper[4900]: I1202 14:02:31.514097 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jsk7w"] Dec 02 14:02:32 crc kubenswrapper[4900]: I1202 14:02:32.908982 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lwsqf" Dec 02 14:02:32 crc kubenswrapper[4900]: I1202 14:02:32.931409 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lhw7\" (UniqueName: \"kubernetes.io/projected/9989e23c-35ec-4efa-9660-0ad9574db896-kube-api-access-6lhw7\") pod \"9989e23c-35ec-4efa-9660-0ad9574db896\" (UID: \"9989e23c-35ec-4efa-9660-0ad9574db896\") " Dec 02 14:02:32 crc kubenswrapper[4900]: I1202 14:02:32.931515 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9989e23c-35ec-4efa-9660-0ad9574db896-operator-scripts\") pod \"9989e23c-35ec-4efa-9660-0ad9574db896\" (UID: \"9989e23c-35ec-4efa-9660-0ad9574db896\") " Dec 02 14:02:32 crc kubenswrapper[4900]: I1202 14:02:32.932824 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b280cf8d-be09-4643-b9fd-e444c63a0440" path="/var/lib/kubelet/pods/b280cf8d-be09-4643-b9fd-e444c63a0440/volumes" Dec 02 14:02:32 crc kubenswrapper[4900]: I1202 14:02:32.934919 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9989e23c-35ec-4efa-9660-0ad9574db896-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9989e23c-35ec-4efa-9660-0ad9574db896" (UID: "9989e23c-35ec-4efa-9660-0ad9574db896"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:32 crc kubenswrapper[4900]: I1202 14:02:32.944886 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9989e23c-35ec-4efa-9660-0ad9574db896-kube-api-access-6lhw7" (OuterVolumeSpecName: "kube-api-access-6lhw7") pod "9989e23c-35ec-4efa-9660-0ad9574db896" (UID: "9989e23c-35ec-4efa-9660-0ad9574db896"). InnerVolumeSpecName "kube-api-access-6lhw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:32 crc kubenswrapper[4900]: I1202 14:02:32.993118 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f5f9-account-create-update-29pcz" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.000779 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ngxzq" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.004362 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a999-account-create-update-h878s" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.033782 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjcfn\" (UniqueName: \"kubernetes.io/projected/fa3c7373-c9f5-4957-abaa-e2719e654d2b-kube-api-access-hjcfn\") pod \"fa3c7373-c9f5-4957-abaa-e2719e654d2b\" (UID: \"fa3c7373-c9f5-4957-abaa-e2719e654d2b\") " Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.033856 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhwb5\" (UniqueName: \"kubernetes.io/projected/e2e7b745-29c6-452c-b9ff-392b476fddd1-kube-api-access-vhwb5\") pod \"e2e7b745-29c6-452c-b9ff-392b476fddd1\" (UID: \"e2e7b745-29c6-452c-b9ff-392b476fddd1\") " Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.033936 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69fae44f-0fc7-41e6-9e73-316ac2e88e40-operator-scripts\") pod \"69fae44f-0fc7-41e6-9e73-316ac2e88e40\" (UID: \"69fae44f-0fc7-41e6-9e73-316ac2e88e40\") " Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.034021 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3c7373-c9f5-4957-abaa-e2719e654d2b-operator-scripts\") pod \"fa3c7373-c9f5-4957-abaa-e2719e654d2b\" (UID: \"fa3c7373-c9f5-4957-abaa-e2719e654d2b\") " Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.034110 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2e7b745-29c6-452c-b9ff-392b476fddd1-operator-scripts\") pod \"e2e7b745-29c6-452c-b9ff-392b476fddd1\" (UID: \"e2e7b745-29c6-452c-b9ff-392b476fddd1\") " Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.034403 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7hk5\" (UniqueName: \"kubernetes.io/projected/69fae44f-0fc7-41e6-9e73-316ac2e88e40-kube-api-access-w7hk5\") pod \"69fae44f-0fc7-41e6-9e73-316ac2e88e40\" (UID: \"69fae44f-0fc7-41e6-9e73-316ac2e88e40\") " Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.034989 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fae44f-0fc7-41e6-9e73-316ac2e88e40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69fae44f-0fc7-41e6-9e73-316ac2e88e40" (UID: "69fae44f-0fc7-41e6-9e73-316ac2e88e40"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.035371 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lhw7\" (UniqueName: \"kubernetes.io/projected/9989e23c-35ec-4efa-9660-0ad9574db896-kube-api-access-6lhw7\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.035389 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69fae44f-0fc7-41e6-9e73-316ac2e88e40-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.035428 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9989e23c-35ec-4efa-9660-0ad9574db896-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.036058 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa3c7373-c9f5-4957-abaa-e2719e654d2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa3c7373-c9f5-4957-abaa-e2719e654d2b" (UID: "fa3c7373-c9f5-4957-abaa-e2719e654d2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.036806 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2e7b745-29c6-452c-b9ff-392b476fddd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2e7b745-29c6-452c-b9ff-392b476fddd1" (UID: "e2e7b745-29c6-452c-b9ff-392b476fddd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.042185 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69fae44f-0fc7-41e6-9e73-316ac2e88e40-kube-api-access-w7hk5" (OuterVolumeSpecName: "kube-api-access-w7hk5") pod "69fae44f-0fc7-41e6-9e73-316ac2e88e40" (UID: "69fae44f-0fc7-41e6-9e73-316ac2e88e40"). InnerVolumeSpecName "kube-api-access-w7hk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.043101 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e7b745-29c6-452c-b9ff-392b476fddd1-kube-api-access-vhwb5" (OuterVolumeSpecName: "kube-api-access-vhwb5") pod "e2e7b745-29c6-452c-b9ff-392b476fddd1" (UID: "e2e7b745-29c6-452c-b9ff-392b476fddd1"). InnerVolumeSpecName "kube-api-access-vhwb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.043181 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa3c7373-c9f5-4957-abaa-e2719e654d2b-kube-api-access-hjcfn" (OuterVolumeSpecName: "kube-api-access-hjcfn") pod "fa3c7373-c9f5-4957-abaa-e2719e654d2b" (UID: "fa3c7373-c9f5-4957-abaa-e2719e654d2b"). InnerVolumeSpecName "kube-api-access-hjcfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.137947 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjcfn\" (UniqueName: \"kubernetes.io/projected/fa3c7373-c9f5-4957-abaa-e2719e654d2b-kube-api-access-hjcfn\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.138008 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhwb5\" (UniqueName: \"kubernetes.io/projected/e2e7b745-29c6-452c-b9ff-392b476fddd1-kube-api-access-vhwb5\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.138029 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3c7373-c9f5-4957-abaa-e2719e654d2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.138048 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2e7b745-29c6-452c-b9ff-392b476fddd1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.138068 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7hk5\" (UniqueName: \"kubernetes.io/projected/69fae44f-0fc7-41e6-9e73-316ac2e88e40-kube-api-access-w7hk5\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.493360 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f5f9-account-create-update-29pcz" event={"ID":"e2e7b745-29c6-452c-b9ff-392b476fddd1","Type":"ContainerDied","Data":"db04cf11c61e39c667aea9e2962187cfa1ecd3b83d2d73530cd714fdfdc16b98"} Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.493802 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db04cf11c61e39c667aea9e2962187cfa1ecd3b83d2d73530cd714fdfdc16b98" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.493401 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f5f9-account-create-update-29pcz" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.495602 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ngxzq" event={"ID":"fa3c7373-c9f5-4957-abaa-e2719e654d2b","Type":"ContainerDied","Data":"2c8238a7abebfac964b3297e51f754b1647558dfa3605daec26d5c38fffb3c6c"} Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.495676 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c8238a7abebfac964b3297e51f754b1647558dfa3605daec26d5c38fffb3c6c" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.495731 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ngxzq" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.499622 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a999-account-create-update-h878s" event={"ID":"69fae44f-0fc7-41e6-9e73-316ac2e88e40","Type":"ContainerDied","Data":"410b30226ed736ace3a0750e4dcd445ce68e1733d40b1b4a82156277c183ad9f"} Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.499690 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="410b30226ed736ace3a0750e4dcd445ce68e1733d40b1b4a82156277c183ad9f" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.499762 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a999-account-create-update-h878s" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.501146 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lwsqf" event={"ID":"9989e23c-35ec-4efa-9660-0ad9574db896","Type":"ContainerDied","Data":"13095b78ea81dcaccae0f3d932bd0064de774e7551b8c0a198b0b9784a5b2fd3"} Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.501173 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lwsqf" Dec 02 14:02:33 crc kubenswrapper[4900]: I1202 14:02:33.501173 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13095b78ea81dcaccae0f3d932bd0064de774e7551b8c0a198b0b9784a5b2fd3" Dec 02 14:02:33 crc kubenswrapper[4900]: W1202 14:02:33.524203 4900 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2e7b745_29c6_452c_b9ff_392b476fddd1.slice/pids.max": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2e7b745_29c6_452c_b9ff_392b476fddd1.slice/pids.max: no such device Dec 02 14:02:33 crc kubenswrapper[4900]: E1202 14:02:33.564663 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2e7b745_29c6_452c_b9ff_392b476fddd1.slice/crio-db04cf11c61e39c667aea9e2962187cfa1ecd3b83d2d73530cd714fdfdc16b98\": RecentStats: unable to find data in memory cache]" Dec 02 14:02:34 crc kubenswrapper[4900]: I1202 14:02:34.955675 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 02 14:02:36 crc kubenswrapper[4900]: I1202 14:02:36.612033 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:36 crc kubenswrapper[4900]: E1202 14:02:36.612240 4900 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 14:02:36 crc kubenswrapper[4900]: E1202 14:02:36.612450 4900 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 14:02:36 crc kubenswrapper[4900]: E1202 14:02:36.612534 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift podName:305da939-8e7b-4fce-95f9-95d90218a1f0 nodeName:}" failed. 
No retries permitted until 2025-12-02 14:02:52.612511252 +0000 UTC m=+1218.028325123 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift") pod "swift-storage-0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0") : configmap "swift-ring-files" not found Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.725482 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.726698 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9cwqh" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.749160 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gn6td" podUID="09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" containerName="ovn-controller" probeResult="failure" output=< Dec 02 14:02:42 crc kubenswrapper[4900]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 02 14:02:42 crc kubenswrapper[4900]: > Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.950956 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gn6td-config-xz842"] Dec 02 14:02:42 crc kubenswrapper[4900]: E1202 14:02:42.951384 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b280cf8d-be09-4643-b9fd-e444c63a0440" containerName="dnsmasq-dns" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.951401 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b280cf8d-be09-4643-b9fd-e444c63a0440" containerName="dnsmasq-dns" Dec 02 14:02:42 crc kubenswrapper[4900]: E1202 14:02:42.951422 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e7b745-29c6-452c-b9ff-392b476fddd1" containerName="mariadb-account-create-update" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.951428 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e7b745-29c6-452c-b9ff-392b476fddd1" containerName="mariadb-account-create-update" Dec 02 14:02:42 crc kubenswrapper[4900]: E1202 14:02:42.951445 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69fae44f-0fc7-41e6-9e73-316ac2e88e40" containerName="mariadb-account-create-update" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.951451 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="69fae44f-0fc7-41e6-9e73-316ac2e88e40" containerName="mariadb-account-create-update" Dec 02 14:02:42 crc kubenswrapper[4900]: E1202 14:02:42.951461 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b280cf8d-be09-4643-b9fd-e444c63a0440" containerName="init" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.951466 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b280cf8d-be09-4643-b9fd-e444c63a0440" containerName="init" Dec 02 14:02:42 crc kubenswrapper[4900]: E1202 14:02:42.951476 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa3c7373-c9f5-4957-abaa-e2719e654d2b" containerName="mariadb-database-create" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.951484 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3c7373-c9f5-4957-abaa-e2719e654d2b" containerName="mariadb-database-create" Dec 02 14:02:42 crc kubenswrapper[4900]: E1202 14:02:42.951498 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9989e23c-35ec-4efa-9660-0ad9574db896" 
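
The swift-storage-0 mount failures above back off exponentially: the first retry was deferred 8s, the next 16s. A minimal sketch of that doubling-with-cap pattern (the initial delay and the cap here are illustrative, not kubelet's exact constants in nestedpendingoperations):

package main

import (
	"fmt"
	"time"
)

// backoff doubles the delay after every consecutive failure, up to a cap,
// mirroring the 8s -> 16s durationBeforeRetry sequence in the log.
type backoff struct {
	delay time.Duration
	cap   time.Duration
}

func (b *backoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.cap {
		b.delay = b.cap
	}
	return d
}

func main() {
	b := backoff{delay: 8 * time.Second, cap: 2 * time.Minute} // illustrative values
	for i := 0; i < 4; i++ {
		fmt.Println(b.next()) // 8s, 16s, 32s, 1m4s
	}
}
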
containerName="mariadb-database-create" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.951504 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9989e23c-35ec-4efa-9660-0ad9574db896" containerName="mariadb-database-create" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.960287 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa3c7373-c9f5-4957-abaa-e2719e654d2b" containerName="mariadb-database-create" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.960324 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="69fae44f-0fc7-41e6-9e73-316ac2e88e40" containerName="mariadb-account-create-update" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.960338 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="9989e23c-35ec-4efa-9660-0ad9574db896" containerName="mariadb-database-create" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.960347 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b280cf8d-be09-4643-b9fd-e444c63a0440" containerName="dnsmasq-dns" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.960362 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e7b745-29c6-452c-b9ff-392b476fddd1" containerName="mariadb-account-create-update" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.960907 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gn6td-config-xz842"] Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.960992 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:42 crc kubenswrapper[4900]: I1202 14:02:42.966291 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.032011 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bktr\" (UniqueName: \"kubernetes.io/projected/a7c555c7-f53c-49ff-8c19-07b895e28b47-kube-api-access-7bktr\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.032062 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run-ovn\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.032107 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.032135 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-scripts\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.032198 4900 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-log-ovn\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.032216 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-additional-scripts\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.133276 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bktr\" (UniqueName: \"kubernetes.io/projected/a7c555c7-f53c-49ff-8c19-07b895e28b47-kube-api-access-7bktr\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.133322 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run-ovn\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.133362 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.133386 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-scripts\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.133448 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-log-ovn\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.133466 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-additional-scripts\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.134076 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-additional-scripts\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 
14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.134523 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run-ovn\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.134570 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.135709 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-log-ovn\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.136197 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-scripts\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.152739 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bktr\" (UniqueName: \"kubernetes.io/projected/a7c555c7-f53c-49ff-8c19-07b895e28b47-kube-api-access-7bktr\") pod \"ovn-controller-gn6td-config-xz842\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.287177 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.615982 4900 generic.go:334] "Generic (PLEG): container finished" podID="ea1df1b2-175e-4695-a514-0378d69d38f9" containerID="c529d208dadf5ed6b32c89601f28c3dcd038273fd7d9fdc091fb6f954fe330e7" exitCode=0 Dec 02 14:02:43 crc kubenswrapper[4900]: I1202 14:02:43.616072 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wt5sd" event={"ID":"ea1df1b2-175e-4695-a514-0378d69d38f9","Type":"ContainerDied","Data":"c529d208dadf5ed6b32c89601f28c3dcd038273fd7d9fdc091fb6f954fe330e7"} Dec 02 14:02:45 crc kubenswrapper[4900]: E1202 14:02:45.211025 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 02 14:02:45 crc kubenswrapper[4900]: E1202 14:02:45.211713 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zd67x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-wrtvb_openstack(51675b3c-124f-44aa-b629-c771287652ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:02:45 crc kubenswrapper[4900]: E1202 14:02:45.213211 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-wrtvb" 
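
The "Unhandled Error" entry above dumps the full glance-db-sync container spec whose start failed with ErrImagePull when the image copy was canceled. Reconstructed as corev1 types, the relevant fields of that dump look roughly like this (trimmed to the essentials; the remaining mounts and security fields follow the dump):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	// Fields copied from the container spec dumped in the log; the pull of
	// this image was canceled, yielding ErrImagePull then ImagePullBackOff.
	c := corev1.Container{
		Name:    "glance-db-sync",
		Image:   "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified",
		Command: []string{"/bin/bash"},
		Args:    []string{"-c", "/usr/local/bin/kolla_start"},
		Env: []corev1.EnvVar{
			{Name: "KOLLA_BOOTSTRAP", Value: "true"},
			{Name: "KOLLA_CONFIG_STRATEGY", Value: "COPY_ALWAYS"},
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "db-sync-config-data", ReadOnly: true, MountPath: "/etc/glance/glance.conf.d"},
			{Name: "config-data", ReadOnly: true, MountPath: "/etc/my.cnf", SubPath: "my.cnf"},
		},
		ImagePullPolicy: corev1.PullIfNotPresent,
		SecurityContext: &corev1.SecurityContext{
			RunAsUser:    ptr(int64(42415)),
			RunAsGroup:   ptr(int64(42415)),
			RunAsNonRoot: ptr(true),
		},
	}
	fmt.Printf("%+v\n", c)
}
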
podUID="51675b3c-124f-44aa-b629-c771287652ef" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.251105 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wt5sd" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.280280 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-dispersionconf\") pod \"ea1df1b2-175e-4695-a514-0378d69d38f9\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.280363 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-scripts\") pod \"ea1df1b2-175e-4695-a514-0378d69d38f9\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.280404 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9chk\" (UniqueName: \"kubernetes.io/projected/ea1df1b2-175e-4695-a514-0378d69d38f9-kube-api-access-c9chk\") pod \"ea1df1b2-175e-4695-a514-0378d69d38f9\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.280471 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-ring-data-devices\") pod \"ea1df1b2-175e-4695-a514-0378d69d38f9\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.280595 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-combined-ca-bundle\") pod \"ea1df1b2-175e-4695-a514-0378d69d38f9\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.280707 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-swiftconf\") pod \"ea1df1b2-175e-4695-a514-0378d69d38f9\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.280775 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ea1df1b2-175e-4695-a514-0378d69d38f9-etc-swift\") pod \"ea1df1b2-175e-4695-a514-0378d69d38f9\" (UID: \"ea1df1b2-175e-4695-a514-0378d69d38f9\") " Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.281905 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ea1df1b2-175e-4695-a514-0378d69d38f9" (UID: "ea1df1b2-175e-4695-a514-0378d69d38f9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.282344 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea1df1b2-175e-4695-a514-0378d69d38f9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ea1df1b2-175e-4695-a514-0378d69d38f9" (UID: "ea1df1b2-175e-4695-a514-0378d69d38f9"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.309831 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ea1df1b2-175e-4695-a514-0378d69d38f9" (UID: "ea1df1b2-175e-4695-a514-0378d69d38f9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.310016 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1df1b2-175e-4695-a514-0378d69d38f9-kube-api-access-c9chk" (OuterVolumeSpecName: "kube-api-access-c9chk") pod "ea1df1b2-175e-4695-a514-0378d69d38f9" (UID: "ea1df1b2-175e-4695-a514-0378d69d38f9"). InnerVolumeSpecName "kube-api-access-c9chk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.311477 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ea1df1b2-175e-4695-a514-0378d69d38f9" (UID: "ea1df1b2-175e-4695-a514-0378d69d38f9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.320473 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-scripts" (OuterVolumeSpecName: "scripts") pod "ea1df1b2-175e-4695-a514-0378d69d38f9" (UID: "ea1df1b2-175e-4695-a514-0378d69d38f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.332902 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea1df1b2-175e-4695-a514-0378d69d38f9" (UID: "ea1df1b2-175e-4695-a514-0378d69d38f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.383476 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.383515 4900 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.383529 4900 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ea1df1b2-175e-4695-a514-0378d69d38f9-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.383540 4900 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ea1df1b2-175e-4695-a514-0378d69d38f9-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.383551 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.383562 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9chk\" (UniqueName: \"kubernetes.io/projected/ea1df1b2-175e-4695-a514-0378d69d38f9-kube-api-access-c9chk\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.383576 4900 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ea1df1b2-175e-4695-a514-0378d69d38f9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.635148 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wt5sd" event={"ID":"ea1df1b2-175e-4695-a514-0378d69d38f9","Type":"ContainerDied","Data":"1183477ee386fbdde99ba893b30ff034cfe3619272d0b4f834d4cae14c08a03b"} Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.635379 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1183477ee386fbdde99ba893b30ff034cfe3619272d0b4f834d4cae14c08a03b" Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.635358 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wt5sd" Dec 02 14:02:45 crc kubenswrapper[4900]: E1202 14:02:45.637264 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-wrtvb" podUID="51675b3c-124f-44aa-b629-c771287652ef" Dec 02 14:02:45 crc kubenswrapper[4900]: W1202 14:02:45.640759 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7c555c7_f53c_49ff_8c19_07b895e28b47.slice/crio-63236973cc1d1470dd5aa7beb50bcd7e3422fda6d2ed5188030b54b1b6358226 WatchSource:0}: Error finding container 63236973cc1d1470dd5aa7beb50bcd7e3422fda6d2ed5188030b54b1b6358226: Status 404 returned error can't find the container with id 63236973cc1d1470dd5aa7beb50bcd7e3422fda6d2ed5188030b54b1b6358226 Dec 02 14:02:45 crc kubenswrapper[4900]: I1202 14:02:45.646922 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gn6td-config-xz842"] Dec 02 14:02:46 crc kubenswrapper[4900]: I1202 14:02:46.644684 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gn6td-config-xz842" event={"ID":"a7c555c7-f53c-49ff-8c19-07b895e28b47","Type":"ContainerStarted","Data":"63236973cc1d1470dd5aa7beb50bcd7e3422fda6d2ed5188030b54b1b6358226"} Dec 02 14:02:47 crc kubenswrapper[4900]: I1202 14:02:47.656914 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gn6td-config-xz842" event={"ID":"a7c555c7-f53c-49ff-8c19-07b895e28b47","Type":"ContainerDied","Data":"4ee103e5b9c065503521c96cc354e2b1419987ab208022be75f831cef4f7e7a6"} Dec 02 14:02:47 crc kubenswrapper[4900]: I1202 14:02:47.656933 4900 generic.go:334] "Generic (PLEG): container finished" podID="a7c555c7-f53c-49ff-8c19-07b895e28b47" containerID="4ee103e5b9c065503521c96cc354e2b1419987ab208022be75f831cef4f7e7a6" exitCode=0 Dec 02 14:02:47 crc kubenswrapper[4900]: I1202 14:02:47.745275 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gn6td" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.072241 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.148240 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bktr\" (UniqueName: \"kubernetes.io/projected/a7c555c7-f53c-49ff-8c19-07b895e28b47-kube-api-access-7bktr\") pod \"a7c555c7-f53c-49ff-8c19-07b895e28b47\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.148341 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-scripts\") pod \"a7c555c7-f53c-49ff-8c19-07b895e28b47\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.148475 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run\") pod \"a7c555c7-f53c-49ff-8c19-07b895e28b47\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.148510 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-additional-scripts\") pod \"a7c555c7-f53c-49ff-8c19-07b895e28b47\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.148575 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-log-ovn\") pod \"a7c555c7-f53c-49ff-8c19-07b895e28b47\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.148717 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run-ovn\") pod \"a7c555c7-f53c-49ff-8c19-07b895e28b47\" (UID: \"a7c555c7-f53c-49ff-8c19-07b895e28b47\") " Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.148985 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run" (OuterVolumeSpecName: "var-run") pod "a7c555c7-f53c-49ff-8c19-07b895e28b47" (UID: "a7c555c7-f53c-49ff-8c19-07b895e28b47"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.149069 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a7c555c7-f53c-49ff-8c19-07b895e28b47" (UID: "a7c555c7-f53c-49ff-8c19-07b895e28b47"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.149191 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a7c555c7-f53c-49ff-8c19-07b895e28b47" (UID: "a7c555c7-f53c-49ff-8c19-07b895e28b47"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.149702 4900 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.149744 4900 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.149771 4900 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a7c555c7-f53c-49ff-8c19-07b895e28b47-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.149766 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a7c555c7-f53c-49ff-8c19-07b895e28b47" (UID: "a7c555c7-f53c-49ff-8c19-07b895e28b47"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.149905 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-scripts" (OuterVolumeSpecName: "scripts") pod "a7c555c7-f53c-49ff-8c19-07b895e28b47" (UID: "a7c555c7-f53c-49ff-8c19-07b895e28b47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.155346 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c555c7-f53c-49ff-8c19-07b895e28b47-kube-api-access-7bktr" (OuterVolumeSpecName: "kube-api-access-7bktr") pod "a7c555c7-f53c-49ff-8c19-07b895e28b47" (UID: "a7c555c7-f53c-49ff-8c19-07b895e28b47"). InnerVolumeSpecName "kube-api-access-7bktr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.250819 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bktr\" (UniqueName: \"kubernetes.io/projected/a7c555c7-f53c-49ff-8c19-07b895e28b47-kube-api-access-7bktr\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.250857 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.250866 4900 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a7c555c7-f53c-49ff-8c19-07b895e28b47-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.677448 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gn6td-config-xz842" event={"ID":"a7c555c7-f53c-49ff-8c19-07b895e28b47","Type":"ContainerDied","Data":"63236973cc1d1470dd5aa7beb50bcd7e3422fda6d2ed5188030b54b1b6358226"} Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.677488 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63236973cc1d1470dd5aa7beb50bcd7e3422fda6d2ed5188030b54b1b6358226" Dec 02 14:02:49 crc kubenswrapper[4900]: I1202 14:02:49.677808 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gn6td-config-xz842" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.221920 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gn6td-config-xz842"] Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.232016 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gn6td-config-xz842"] Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.308988 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gn6td-config-zhpwl"] Dec 02 14:02:50 crc kubenswrapper[4900]: E1202 14:02:50.309378 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1df1b2-175e-4695-a514-0378d69d38f9" containerName="swift-ring-rebalance" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.309397 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1df1b2-175e-4695-a514-0378d69d38f9" containerName="swift-ring-rebalance" Dec 02 14:02:50 crc kubenswrapper[4900]: E1202 14:02:50.309421 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c555c7-f53c-49ff-8c19-07b895e28b47" containerName="ovn-config" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.309431 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c555c7-f53c-49ff-8c19-07b895e28b47" containerName="ovn-config" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.309638 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1df1b2-175e-4695-a514-0378d69d38f9" containerName="swift-ring-rebalance" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.309673 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c555c7-f53c-49ff-8c19-07b895e28b47" containerName="ovn-config" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.310729 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.312822 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.325796 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gn6td-config-zhpwl"] Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.372673 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-additional-scripts\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.372722 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.372768 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-log-ovn\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.372911 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-scripts\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.373016 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run-ovn\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.373295 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shth6\" (UniqueName: \"kubernetes.io/projected/661ea294-f433-451c-93a7-91ef2b794e79-kube-api-access-shth6\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.475068 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-scripts\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.475156 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run-ovn\") pod 
\"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.475393 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shth6\" (UniqueName: \"kubernetes.io/projected/661ea294-f433-451c-93a7-91ef2b794e79-kube-api-access-shth6\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.475470 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-additional-scripts\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.475509 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.475545 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-log-ovn\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.475604 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run-ovn\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.475706 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.475718 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-log-ovn\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.476749 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-additional-scripts\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.478410 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-scripts\") pod 
\"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.497898 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shth6\" (UniqueName: \"kubernetes.io/projected/661ea294-f433-451c-93a7-91ef2b794e79-kube-api-access-shth6\") pod \"ovn-controller-gn6td-config-zhpwl\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.634604 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.694061 4900 generic.go:334] "Generic (PLEG): container finished" podID="e410de46-b373-431a-8486-21a6f1268e41" containerID="5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168" exitCode=0 Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.694195 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e410de46-b373-431a-8486-21a6f1268e41","Type":"ContainerDied","Data":"5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168"} Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.697389 4900 generic.go:334] "Generic (PLEG): container finished" podID="8db82600-180c-4114-8006-551e1b566ce5" containerID="1edf53c496618c33923bb60078803c42df40f981136a82e37805dfe6b475de7b" exitCode=0 Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.697436 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8db82600-180c-4114-8006-551e1b566ce5","Type":"ContainerDied","Data":"1edf53c496618c33923bb60078803c42df40f981136a82e37805dfe6b475de7b"} Dec 02 14:02:50 crc kubenswrapper[4900]: I1202 14:02:50.939117 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c555c7-f53c-49ff-8c19-07b895e28b47" path="/var/lib/kubelet/pods/a7c555c7-f53c-49ff-8c19-07b895e28b47/volumes" Dec 02 14:02:51 crc kubenswrapper[4900]: W1202 14:02:51.183720 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod661ea294_f433_451c_93a7_91ef2b794e79.slice/crio-582a869f82f0657b6dd2108497e751f08121c25c5c295fe2d9be7e8f0413bb63 WatchSource:0}: Error finding container 582a869f82f0657b6dd2108497e751f08121c25c5c295fe2d9be7e8f0413bb63: Status 404 returned error can't find the container with id 582a869f82f0657b6dd2108497e751f08121c25c5c295fe2d9be7e8f0413bb63 Dec 02 14:02:51 crc kubenswrapper[4900]: I1202 14:02:51.184131 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gn6td-config-zhpwl"] Dec 02 14:02:51 crc kubenswrapper[4900]: I1202 14:02:51.706925 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e410de46-b373-431a-8486-21a6f1268e41","Type":"ContainerStarted","Data":"983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403"} Dec 02 14:02:51 crc kubenswrapper[4900]: I1202 14:02:51.707599 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 02 14:02:51 crc kubenswrapper[4900]: I1202 14:02:51.711169 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"8db82600-180c-4114-8006-551e1b566ce5","Type":"ContainerStarted","Data":"c9b48d55f32d54ed9f77fab0b281d7e2bb2a1783f7388f1bec82ef0b685bf983"} Dec 02 14:02:51 crc kubenswrapper[4900]: I1202 14:02:51.711435 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:51 crc kubenswrapper[4900]: I1202 14:02:51.715398 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gn6td-config-zhpwl" event={"ID":"661ea294-f433-451c-93a7-91ef2b794e79","Type":"ContainerStarted","Data":"b0df1054bf4eef242497c25a69090f320c164e3ebb388b369e71485192e05d15"} Dec 02 14:02:51 crc kubenswrapper[4900]: I1202 14:02:51.715437 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gn6td-config-zhpwl" event={"ID":"661ea294-f433-451c-93a7-91ef2b794e79","Type":"ContainerStarted","Data":"582a869f82f0657b6dd2108497e751f08121c25c5c295fe2d9be7e8f0413bb63"} Dec 02 14:02:51 crc kubenswrapper[4900]: I1202 14:02:51.740984 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.842560173 podStartE2EDuration="1m19.740960048s" podCreationTimestamp="2025-12-02 14:01:32 +0000 UTC" firstStartedPulling="2025-12-02 14:01:34.621816398 +0000 UTC m=+1140.037630249" lastFinishedPulling="2025-12-02 14:02:15.520216273 +0000 UTC m=+1180.936030124" observedRunningTime="2025-12-02 14:02:51.735079113 +0000 UTC m=+1217.150892984" watchObservedRunningTime="2025-12-02 14:02:51.740960048 +0000 UTC m=+1217.156773919" Dec 02 14:02:51 crc kubenswrapper[4900]: I1202 14:02:51.771308 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371957.083487 podStartE2EDuration="1m19.771288968s" podCreationTimestamp="2025-12-02 14:01:32 +0000 UTC" firstStartedPulling="2025-12-02 14:01:34.905861954 +0000 UTC m=+1140.321675805" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:02:51.76708473 +0000 UTC m=+1217.182898601" watchObservedRunningTime="2025-12-02 14:02:51.771288968 +0000 UTC m=+1217.187102819" Dec 02 14:02:52 crc kubenswrapper[4900]: I1202 14:02:52.623161 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:52 crc kubenswrapper[4900]: I1202 14:02:52.630247 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift\") pod \"swift-storage-0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") " pod="openstack/swift-storage-0" Dec 02 14:02:52 crc kubenswrapper[4900]: I1202 14:02:52.738596 4900 generic.go:334] "Generic (PLEG): container finished" podID="661ea294-f433-451c-93a7-91ef2b794e79" containerID="b0df1054bf4eef242497c25a69090f320c164e3ebb388b369e71485192e05d15" exitCode=0 Dec 02 14:02:52 crc kubenswrapper[4900]: I1202 14:02:52.740272 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gn6td-config-zhpwl" event={"ID":"661ea294-f433-451c-93a7-91ef2b794e79","Type":"ContainerDied","Data":"b0df1054bf4eef242497c25a69090f320c164e3ebb388b369e71485192e05d15"} Dec 02 14:02:52 crc kubenswrapper[4900]: I1202 14:02:52.795406 4900 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.062444 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.135389 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run-ovn\") pod \"661ea294-f433-451c-93a7-91ef2b794e79\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.135450 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-log-ovn\") pod \"661ea294-f433-451c-93a7-91ef2b794e79\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.135538 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "661ea294-f433-451c-93a7-91ef2b794e79" (UID: "661ea294-f433-451c-93a7-91ef2b794e79"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.135575 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "661ea294-f433-451c-93a7-91ef2b794e79" (UID: "661ea294-f433-451c-93a7-91ef2b794e79"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.135661 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run\") pod \"661ea294-f433-451c-93a7-91ef2b794e79\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.135755 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shth6\" (UniqueName: \"kubernetes.io/projected/661ea294-f433-451c-93a7-91ef2b794e79-kube-api-access-shth6\") pod \"661ea294-f433-451c-93a7-91ef2b794e79\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.135758 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run" (OuterVolumeSpecName: "var-run") pod "661ea294-f433-451c-93a7-91ef2b794e79" (UID: "661ea294-f433-451c-93a7-91ef2b794e79"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.135889 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-additional-scripts\") pod \"661ea294-f433-451c-93a7-91ef2b794e79\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.135951 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-scripts\") pod \"661ea294-f433-451c-93a7-91ef2b794e79\" (UID: \"661ea294-f433-451c-93a7-91ef2b794e79\") " Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.136412 4900 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.136436 4900 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.136446 4900 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/661ea294-f433-451c-93a7-91ef2b794e79-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.136633 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "661ea294-f433-451c-93a7-91ef2b794e79" (UID: "661ea294-f433-451c-93a7-91ef2b794e79"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.136857 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-scripts" (OuterVolumeSpecName: "scripts") pod "661ea294-f433-451c-93a7-91ef2b794e79" (UID: "661ea294-f433-451c-93a7-91ef2b794e79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.143021 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661ea294-f433-451c-93a7-91ef2b794e79-kube-api-access-shth6" (OuterVolumeSpecName: "kube-api-access-shth6") pod "661ea294-f433-451c-93a7-91ef2b794e79" (UID: "661ea294-f433-451c-93a7-91ef2b794e79"). InnerVolumeSpecName "kube-api-access-shth6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.238735 4900 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.238795 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/661ea294-f433-451c-93a7-91ef2b794e79-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.238815 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shth6\" (UniqueName: \"kubernetes.io/projected/661ea294-f433-451c-93a7-91ef2b794e79-kube-api-access-shth6\") on node \"crc\" DevicePath \"\"" Dec 02 14:02:53 crc kubenswrapper[4900]: W1202 14:02:53.434923 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305da939_8e7b_4fce_95f9_95d90218a1f0.slice/crio-a02baaad4629d7d0beb5a9e4cdf39ed709a63466f9a823870e16e8f6f9fa9d3a WatchSource:0}: Error finding container a02baaad4629d7d0beb5a9e4cdf39ed709a63466f9a823870e16e8f6f9fa9d3a: Status 404 returned error can't find the container with id a02baaad4629d7d0beb5a9e4cdf39ed709a63466f9a823870e16e8f6f9fa9d3a Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.436117 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.748872 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"a02baaad4629d7d0beb5a9e4cdf39ed709a63466f9a823870e16e8f6f9fa9d3a"} Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.750945 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gn6td-config-zhpwl" event={"ID":"661ea294-f433-451c-93a7-91ef2b794e79","Type":"ContainerDied","Data":"582a869f82f0657b6dd2108497e751f08121c25c5c295fe2d9be7e8f0413bb63"} Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.750966 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="582a869f82f0657b6dd2108497e751f08121c25c5c295fe2d9be7e8f0413bb63" Dec 02 14:02:53 crc kubenswrapper[4900]: I1202 14:02:53.751080 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gn6td-config-zhpwl" Dec 02 14:02:54 crc kubenswrapper[4900]: I1202 14:02:54.145310 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gn6td-config-zhpwl"] Dec 02 14:02:54 crc kubenswrapper[4900]: I1202 14:02:54.155118 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gn6td-config-zhpwl"] Dec 02 14:02:54 crc kubenswrapper[4900]: I1202 14:02:54.926483 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="661ea294-f433-451c-93a7-91ef2b794e79" path="/var/lib/kubelet/pods/661ea294-f433-451c-93a7-91ef2b794e79/volumes" Dec 02 14:02:55 crc kubenswrapper[4900]: I1202 14:02:55.770817 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"304355d78e40f6ca3b22a607c420ecdb93fd14f1a0a1d10ee78e70aca9138742"} Dec 02 14:02:55 crc kubenswrapper[4900]: I1202 14:02:55.771231 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"5e0242301bbd13a18a7ab682fc5ef7d58a6f6c86146abab5ab241882c022c72e"} Dec 02 14:02:56 crc kubenswrapper[4900]: I1202 14:02:56.782930 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"535a4b01d9acc099e8e0cf36306f3d1613b8d40a0a2886c27a5e3adb4d22106c"} Dec 02 14:03:02 crc kubenswrapper[4900]: I1202 14:03:02.849074 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"10a9aaa8d1a2413e0ef899da8043a3d293c39ba29883684daac125f654e247c6"} Dec 02 14:03:04 crc kubenswrapper[4900]: I1202 14:03:04.158596 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e410de46-b373-431a-8486-21a6f1268e41" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Dec 02 14:03:04 crc kubenswrapper[4900]: I1202 14:03:04.391403 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="8db82600-180c-4114-8006-551e1b566ce5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Dec 02 14:03:05 crc kubenswrapper[4900]: I1202 14:03:05.887469 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wrtvb" event={"ID":"51675b3c-124f-44aa-b629-c771287652ef","Type":"ContainerStarted","Data":"a1ffea810a8add4b42ba35ba2e8c0050d0718defcd9ccaeab6fc931cff075942"} Dec 02 14:03:05 crc kubenswrapper[4900]: I1202 14:03:05.930763 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wrtvb" podStartSLOduration=4.444530728 podStartE2EDuration="38.930738328s" podCreationTimestamp="2025-12-02 14:02:27 +0000 UTC" firstStartedPulling="2025-12-02 14:02:30.229284466 +0000 UTC m=+1195.645098317" lastFinishedPulling="2025-12-02 14:03:04.715492046 +0000 UTC m=+1230.131305917" observedRunningTime="2025-12-02 14:03:05.923781493 +0000 UTC m=+1231.339595344" watchObservedRunningTime="2025-12-02 14:03:05.930738328 +0000 UTC m=+1231.346552179" Dec 02 14:03:06 crc kubenswrapper[4900]: I1202 14:03:06.902469 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"bf6dbc2d90f268fe7fed54cad255fdedb06111980e4d028a6b734115fcd4bff2"} Dec 02 14:03:07 crc kubenswrapper[4900]: I1202 14:03:07.918221 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"8e6a67bb6f1294f115624e7162a130f3eabff83ef59d7b2a1a87dc5e03f7e6e7"} Dec 02 14:03:07 crc kubenswrapper[4900]: I1202 14:03:07.918605 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"38eff436fe11c5890e207833fe423224c1e521b3b82a519361fefcff2af660ad"} Dec 02 14:03:07 crc kubenswrapper[4900]: I1202 14:03:07.918617 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"46f15348813d8055006838bad9d40dbb909e9eabfc30521e1baeaf728552da63"} Dec 02 14:03:09 crc kubenswrapper[4900]: I1202 14:03:09.948309 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"b9903237aae7b30e4786154f26720bc4cccb8456c76a64b913e79db33e9723cc"} Dec 02 14:03:09 crc kubenswrapper[4900]: I1202 14:03:09.948992 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"25219f1dbf2d7a01dd6cfe25cfa91ecaaaabec4527f8896e9ed0b10b42b25db3"} Dec 02 14:03:09 crc kubenswrapper[4900]: I1202 14:03:09.949022 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"7f2a46fb8785892c4a865fae00d8ed6142ab75fae046b42634d84a99c5fcf69d"} Dec 02 14:03:10 crc kubenswrapper[4900]: I1202 14:03:10.964053 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"6f12993e1fc195acb36a4222c9e80cc1d4aeaa566382dddf8b897df3ae681468"} Dec 02 14:03:10 crc kubenswrapper[4900]: I1202 14:03:10.964801 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"92b595b2d89b2be8cfc2216546011c9aad218c2d134cbf0d7dd2eeded97e32ae"} Dec 02 14:03:10 crc kubenswrapper[4900]: I1202 14:03:10.964812 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"650e07decb4d0921b10393aec4c8765f7b943d7fb39cad739dc92c08bc0cf83c"} Dec 02 14:03:10 crc kubenswrapper[4900]: I1202 14:03:10.964821 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerStarted","Data":"5821e46042485c1373fa8cae7c61b288c7c4cea999d146348d992d1f1ebe01ae"} Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.012887 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.278614882 podStartE2EDuration="52.012871866s" podCreationTimestamp="2025-12-02 14:02:19 +0000 UTC" firstStartedPulling="2025-12-02 14:02:53.438097454 +0000 UTC 
m=+1218.853911305" lastFinishedPulling="2025-12-02 14:03:09.172354398 +0000 UTC m=+1234.588168289" observedRunningTime="2025-12-02 14:03:11.007266088 +0000 UTC m=+1236.423079979" watchObservedRunningTime="2025-12-02 14:03:11.012871866 +0000 UTC m=+1236.428685717" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.330252 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z8g9w"] Dec 02 14:03:11 crc kubenswrapper[4900]: E1202 14:03:11.330840 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661ea294-f433-451c-93a7-91ef2b794e79" containerName="ovn-config" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.330871 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="661ea294-f433-451c-93a7-91ef2b794e79" containerName="ovn-config" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.331195 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="661ea294-f433-451c-93a7-91ef2b794e79" containerName="ovn-config" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.332682 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.335014 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.341374 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z8g9w"] Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.458609 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.458720 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.458757 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mz8m\" (UniqueName: \"kubernetes.io/projected/e60294be-bae1-40d6-9ff1-a6931b1989e6-kube-api-access-7mz8m\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.458808 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.458936 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.458989 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-config\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.559915 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.559992 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.560023 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mz8m\" (UniqueName: \"kubernetes.io/projected/e60294be-bae1-40d6-9ff1-a6931b1989e6-kube-api-access-7mz8m\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.560060 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.560114 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.560146 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-config\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.561084 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-config\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.561180 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: 
I1202 14:03:11.561183 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.561664 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.561721 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.595606 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mz8m\" (UniqueName: \"kubernetes.io/projected/e60294be-bae1-40d6-9ff1-a6931b1989e6-kube-api-access-7mz8m\") pod \"dnsmasq-dns-5c79d794d7-z8g9w\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:11 crc kubenswrapper[4900]: I1202 14:03:11.652528 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:12 crc kubenswrapper[4900]: I1202 14:03:12.155224 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z8g9w"] Dec 02 14:03:12 crc kubenswrapper[4900]: I1202 14:03:12.990726 4900 generic.go:334] "Generic (PLEG): container finished" podID="e60294be-bae1-40d6-9ff1-a6931b1989e6" containerID="30c333b827d7b114b00211ab97d3d6daf8d1fcb1d9c77e462a88f640fd565582" exitCode=0 Dec 02 14:03:12 crc kubenswrapper[4900]: I1202 14:03:12.990812 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" event={"ID":"e60294be-bae1-40d6-9ff1-a6931b1989e6","Type":"ContainerDied","Data":"30c333b827d7b114b00211ab97d3d6daf8d1fcb1d9c77e462a88f640fd565582"} Dec 02 14:03:12 crc kubenswrapper[4900]: I1202 14:03:12.991232 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" event={"ID":"e60294be-bae1-40d6-9ff1-a6931b1989e6","Type":"ContainerStarted","Data":"5665d2d51832ff98763d1ed62bb4311713731e6e3a611987490ebd569228e97f"} Dec 02 14:03:12 crc kubenswrapper[4900]: I1202 14:03:12.999915 4900 generic.go:334] "Generic (PLEG): container finished" podID="51675b3c-124f-44aa-b629-c771287652ef" containerID="a1ffea810a8add4b42ba35ba2e8c0050d0718defcd9ccaeab6fc931cff075942" exitCode=0 Dec 02 14:03:12 crc kubenswrapper[4900]: I1202 14:03:12.999962 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wrtvb" event={"ID":"51675b3c-124f-44aa-b629-c771287652ef","Type":"ContainerDied","Data":"a1ffea810a8add4b42ba35ba2e8c0050d0718defcd9ccaeab6fc931cff075942"} Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.011714 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" 
event={"ID":"e60294be-bae1-40d6-9ff1-a6931b1989e6","Type":"ContainerStarted","Data":"ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623"} Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.013021 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.065810 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" podStartSLOduration=3.065787764 podStartE2EDuration="3.065787764s" podCreationTimestamp="2025-12-02 14:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:14.058336745 +0000 UTC m=+1239.474150606" watchObservedRunningTime="2025-12-02 14:03:14.065787764 +0000 UTC m=+1239.481601625" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.159883 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.396851 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.515118 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4zn4n"] Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.529279 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zn4n" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.544907 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4zn4n"] Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.615360 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-879t5"] Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.618027 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-879t5" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.627869 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5ba5-account-create-update-hqqxz"] Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.629066 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5ba5-account-create-update-hqqxz" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.630475 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-operator-scripts\") pod \"cinder-db-create-4zn4n\" (UID: \"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29\") " pod="openstack/cinder-db-create-4zn4n" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.630549 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jsc4\" (UniqueName: \"kubernetes.io/projected/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-kube-api-access-8jsc4\") pod \"cinder-db-create-4zn4n\" (UID: \"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29\") " pod="openstack/cinder-db-create-4zn4n" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.632794 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.634144 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-879t5"] Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.651811 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5ba5-account-create-update-hqqxz"] Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.664305 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wrtvb" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.738945 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509f62ef-848d-46b5-8272-1e94429353cb-operator-scripts\") pod \"barbican-db-create-879t5\" (UID: \"509f62ef-848d-46b5-8272-1e94429353cb\") " pod="openstack/barbican-db-create-879t5" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.739010 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb87m\" (UniqueName: \"kubernetes.io/projected/509f62ef-848d-46b5-8272-1e94429353cb-kube-api-access-jb87m\") pod \"barbican-db-create-879t5\" (UID: \"509f62ef-848d-46b5-8272-1e94429353cb\") " pod="openstack/barbican-db-create-879t5" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.739051 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/809485b0-485a-437a-93f3-432499b8e2c5-operator-scripts\") pod \"cinder-5ba5-account-create-update-hqqxz\" (UID: \"809485b0-485a-437a-93f3-432499b8e2c5\") " pod="openstack/cinder-5ba5-account-create-update-hqqxz" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.739110 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59trw\" (UniqueName: \"kubernetes.io/projected/809485b0-485a-437a-93f3-432499b8e2c5-kube-api-access-59trw\") pod \"cinder-5ba5-account-create-update-hqqxz\" (UID: \"809485b0-485a-437a-93f3-432499b8e2c5\") " pod="openstack/cinder-5ba5-account-create-update-hqqxz" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.739158 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-operator-scripts\") pod 
\"cinder-db-create-4zn4n\" (UID: \"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29\") " pod="openstack/cinder-db-create-4zn4n" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.739206 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jsc4\" (UniqueName: \"kubernetes.io/projected/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-kube-api-access-8jsc4\") pod \"cinder-db-create-4zn4n\" (UID: \"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29\") " pod="openstack/cinder-db-create-4zn4n" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.740263 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-operator-scripts\") pod \"cinder-db-create-4zn4n\" (UID: \"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29\") " pod="openstack/cinder-db-create-4zn4n" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.757219 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jsc4\" (UniqueName: \"kubernetes.io/projected/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-kube-api-access-8jsc4\") pod \"cinder-db-create-4zn4n\" (UID: \"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29\") " pod="openstack/cinder-db-create-4zn4n" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.814463 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lskk6"] Dec 02 14:03:14 crc kubenswrapper[4900]: E1202 14:03:14.814825 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51675b3c-124f-44aa-b629-c771287652ef" containerName="glance-db-sync" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.814841 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="51675b3c-124f-44aa-b629-c771287652ef" containerName="glance-db-sync" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.815008 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="51675b3c-124f-44aa-b629-c771287652ef" containerName="glance-db-sync" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.815545 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lskk6" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.826054 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lskk6"] Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.840480 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-config-data\") pod \"51675b3c-124f-44aa-b629-c771287652ef\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.840578 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-db-sync-config-data\") pod \"51675b3c-124f-44aa-b629-c771287652ef\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.840618 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-combined-ca-bundle\") pod \"51675b3c-124f-44aa-b629-c771287652ef\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.840760 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd67x\" (UniqueName: \"kubernetes.io/projected/51675b3c-124f-44aa-b629-c771287652ef-kube-api-access-zd67x\") pod \"51675b3c-124f-44aa-b629-c771287652ef\" (UID: \"51675b3c-124f-44aa-b629-c771287652ef\") " Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.841069 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509f62ef-848d-46b5-8272-1e94429353cb-operator-scripts\") pod \"barbican-db-create-879t5\" (UID: \"509f62ef-848d-46b5-8272-1e94429353cb\") " pod="openstack/barbican-db-create-879t5" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.841119 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb87m\" (UniqueName: \"kubernetes.io/projected/509f62ef-848d-46b5-8272-1e94429353cb-kube-api-access-jb87m\") pod \"barbican-db-create-879t5\" (UID: \"509f62ef-848d-46b5-8272-1e94429353cb\") " pod="openstack/barbican-db-create-879t5" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.841159 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/809485b0-485a-437a-93f3-432499b8e2c5-operator-scripts\") pod \"cinder-5ba5-account-create-update-hqqxz\" (UID: \"809485b0-485a-437a-93f3-432499b8e2c5\") " pod="openstack/cinder-5ba5-account-create-update-hqqxz" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.841234 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59trw\" (UniqueName: \"kubernetes.io/projected/809485b0-485a-437a-93f3-432499b8e2c5-kube-api-access-59trw\") pod \"cinder-5ba5-account-create-update-hqqxz\" (UID: \"809485b0-485a-437a-93f3-432499b8e2c5\") " pod="openstack/cinder-5ba5-account-create-update-hqqxz" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.842309 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509f62ef-848d-46b5-8272-1e94429353cb-operator-scripts\") pod 
\"barbican-db-create-879t5\" (UID: \"509f62ef-848d-46b5-8272-1e94429353cb\") " pod="openstack/barbican-db-create-879t5" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.843081 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/809485b0-485a-437a-93f3-432499b8e2c5-operator-scripts\") pod \"cinder-5ba5-account-create-update-hqqxz\" (UID: \"809485b0-485a-437a-93f3-432499b8e2c5\") " pod="openstack/cinder-5ba5-account-create-update-hqqxz" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.849663 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "51675b3c-124f-44aa-b629-c771287652ef" (UID: "51675b3c-124f-44aa-b629-c771287652ef"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.856839 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zn4n" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.860187 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51675b3c-124f-44aa-b629-c771287652ef-kube-api-access-zd67x" (OuterVolumeSpecName: "kube-api-access-zd67x") pod "51675b3c-124f-44aa-b629-c771287652ef" (UID: "51675b3c-124f-44aa-b629-c771287652ef"). InnerVolumeSpecName "kube-api-access-zd67x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.862112 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb87m\" (UniqueName: \"kubernetes.io/projected/509f62ef-848d-46b5-8272-1e94429353cb-kube-api-access-jb87m\") pod \"barbican-db-create-879t5\" (UID: \"509f62ef-848d-46b5-8272-1e94429353cb\") " pod="openstack/barbican-db-create-879t5" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.864383 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59trw\" (UniqueName: \"kubernetes.io/projected/809485b0-485a-437a-93f3-432499b8e2c5-kube-api-access-59trw\") pod \"cinder-5ba5-account-create-update-hqqxz\" (UID: \"809485b0-485a-437a-93f3-432499b8e2c5\") " pod="openstack/cinder-5ba5-account-create-update-hqqxz" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.883887 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-smkbq"] Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.889087 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.893203 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.893803 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.893920 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x8ncb" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.894033 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.901118 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-smkbq"] Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.948287 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e0fc853-4474-46e7-8669-5c132f629baf-operator-scripts\") pod \"neutron-db-create-lskk6\" (UID: \"8e0fc853-4474-46e7-8669-5c132f629baf\") " pod="openstack/neutron-db-create-lskk6" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.948363 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw8x7\" (UniqueName: \"kubernetes.io/projected/8e0fc853-4474-46e7-8669-5c132f629baf-kube-api-access-bw8x7\") pod \"neutron-db-create-lskk6\" (UID: \"8e0fc853-4474-46e7-8669-5c132f629baf\") " pod="openstack/neutron-db-create-lskk6" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.948457 4900 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.948469 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd67x\" (UniqueName: \"kubernetes.io/projected/51675b3c-124f-44aa-b629-c771287652ef-kube-api-access-zd67x\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.954830 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-1777-account-create-update-h5cks"] Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.955105 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51675b3c-124f-44aa-b629-c771287652ef" (UID: "51675b3c-124f-44aa-b629-c771287652ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.956738 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1777-account-create-update-h5cks" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.961213 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.972370 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1777-account-create-update-h5cks"] Dec 02 14:03:14 crc kubenswrapper[4900]: I1202 14:03:14.981790 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-config-data" (OuterVolumeSpecName: "config-data") pod "51675b3c-124f-44aa-b629-c771287652ef" (UID: "51675b3c-124f-44aa-b629-c771287652ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.012340 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-879t5" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.017436 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5ba5-account-create-update-hqqxz" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.022429 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wrtvb" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.024873 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wrtvb" event={"ID":"51675b3c-124f-44aa-b629-c771287652ef","Type":"ContainerDied","Data":"2cf38d58c37d457fa20081dfd3426c02b07fcbf5f25094d4c9a96d132566a906"} Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.024897 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cf38d58c37d457fa20081dfd3426c02b07fcbf5f25094d4c9a96d132566a906" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.024911 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4c1f-account-create-update-s546k"] Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.026057 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4c1f-account-create-update-s546k" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.028285 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.038063 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4c1f-account-create-update-s546k"] Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.059061 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms99f\" (UniqueName: \"kubernetes.io/projected/38f3a381-1653-4c21-929c-86e764024d0c-kube-api-access-ms99f\") pod \"keystone-db-sync-smkbq\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.059158 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e0fc853-4474-46e7-8669-5c132f629baf-operator-scripts\") pod \"neutron-db-create-lskk6\" (UID: \"8e0fc853-4474-46e7-8669-5c132f629baf\") " pod="openstack/neutron-db-create-lskk6" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.059177 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9d9r\" (UniqueName: \"kubernetes.io/projected/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-kube-api-access-g9d9r\") pod \"neutron-1777-account-create-update-h5cks\" (UID: \"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729\") " pod="openstack/neutron-1777-account-create-update-h5cks" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.059212 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw8x7\" (UniqueName: \"kubernetes.io/projected/8e0fc853-4474-46e7-8669-5c132f629baf-kube-api-access-bw8x7\") pod \"neutron-db-create-lskk6\" (UID: \"8e0fc853-4474-46e7-8669-5c132f629baf\") " pod="openstack/neutron-db-create-lskk6" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.059235 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-config-data\") pod \"keystone-db-sync-smkbq\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.059253 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-combined-ca-bundle\") pod \"keystone-db-sync-smkbq\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.059316 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-operator-scripts\") pod \"neutron-1777-account-create-update-h5cks\" (UID: \"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729\") " pod="openstack/neutron-1777-account-create-update-h5cks" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.059354 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.059367 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51675b3c-124f-44aa-b629-c771287652ef-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.061833 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e0fc853-4474-46e7-8669-5c132f629baf-operator-scripts\") pod \"neutron-db-create-lskk6\" (UID: \"8e0fc853-4474-46e7-8669-5c132f629baf\") " pod="openstack/neutron-db-create-lskk6" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.097351 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw8x7\" (UniqueName: \"kubernetes.io/projected/8e0fc853-4474-46e7-8669-5c132f629baf-kube-api-access-bw8x7\") pod \"neutron-db-create-lskk6\" (UID: \"8e0fc853-4474-46e7-8669-5c132f629baf\") " pod="openstack/neutron-db-create-lskk6" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.132845 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lskk6" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.160460 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7swwj\" (UniqueName: \"kubernetes.io/projected/213ca91f-e63f-4f0e-a161-57f4cb101c0f-kube-api-access-7swwj\") pod \"barbican-4c1f-account-create-update-s546k\" (UID: \"213ca91f-e63f-4f0e-a161-57f4cb101c0f\") " pod="openstack/barbican-4c1f-account-create-update-s546k" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.160503 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/213ca91f-e63f-4f0e-a161-57f4cb101c0f-operator-scripts\") pod \"barbican-4c1f-account-create-update-s546k\" (UID: \"213ca91f-e63f-4f0e-a161-57f4cb101c0f\") " pod="openstack/barbican-4c1f-account-create-update-s546k" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.160528 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-operator-scripts\") pod \"neutron-1777-account-create-update-h5cks\" (UID: \"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729\") " pod="openstack/neutron-1777-account-create-update-h5cks" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.160562 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms99f\" (UniqueName: \"kubernetes.io/projected/38f3a381-1653-4c21-929c-86e764024d0c-kube-api-access-ms99f\") pod \"keystone-db-sync-smkbq\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.160638 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9d9r\" (UniqueName: \"kubernetes.io/projected/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-kube-api-access-g9d9r\") pod \"neutron-1777-account-create-update-h5cks\" (UID: \"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729\") " pod="openstack/neutron-1777-account-create-update-h5cks" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.160709 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-config-data\") pod \"keystone-db-sync-smkbq\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.160738 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-combined-ca-bundle\") pod \"keystone-db-sync-smkbq\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.161404 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-operator-scripts\") pod \"neutron-1777-account-create-update-h5cks\" (UID: \"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729\") " pod="openstack/neutron-1777-account-create-update-h5cks" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.164319 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-combined-ca-bundle\") pod \"keystone-db-sync-smkbq\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.167448 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-config-data\") pod \"keystone-db-sync-smkbq\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.178588 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9d9r\" (UniqueName: \"kubernetes.io/projected/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-kube-api-access-g9d9r\") pod \"neutron-1777-account-create-update-h5cks\" (UID: \"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729\") " pod="openstack/neutron-1777-account-create-update-h5cks" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.193404 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms99f\" (UniqueName: \"kubernetes.io/projected/38f3a381-1653-4c21-929c-86e764024d0c-kube-api-access-ms99f\") pod \"keystone-db-sync-smkbq\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.255566 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.266893 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7swwj\" (UniqueName: \"kubernetes.io/projected/213ca91f-e63f-4f0e-a161-57f4cb101c0f-kube-api-access-7swwj\") pod \"barbican-4c1f-account-create-update-s546k\" (UID: \"213ca91f-e63f-4f0e-a161-57f4cb101c0f\") " pod="openstack/barbican-4c1f-account-create-update-s546k" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.266951 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/213ca91f-e63f-4f0e-a161-57f4cb101c0f-operator-scripts\") pod \"barbican-4c1f-account-create-update-s546k\" (UID: \"213ca91f-e63f-4f0e-a161-57f4cb101c0f\") " pod="openstack/barbican-4c1f-account-create-update-s546k" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.267801 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/213ca91f-e63f-4f0e-a161-57f4cb101c0f-operator-scripts\") pod \"barbican-4c1f-account-create-update-s546k\" (UID: \"213ca91f-e63f-4f0e-a161-57f4cb101c0f\") " pod="openstack/barbican-4c1f-account-create-update-s546k" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.284661 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1777-account-create-update-h5cks" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.294915 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7swwj\" (UniqueName: \"kubernetes.io/projected/213ca91f-e63f-4f0e-a161-57f4cb101c0f-kube-api-access-7swwj\") pod \"barbican-4c1f-account-create-update-s546k\" (UID: \"213ca91f-e63f-4f0e-a161-57f4cb101c0f\") " pod="openstack/barbican-4c1f-account-create-update-s546k" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.378942 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4c1f-account-create-update-s546k" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.434312 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4zn4n"] Dec 02 14:03:15 crc kubenswrapper[4900]: W1202 14:03:15.466276 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0c9e2e8_fe2e_45b4_b7ad_5c574139db29.slice/crio-614c5e973640c8c64ce231d7258d1a2961e7041cae205b318ae147e428560bf4 WatchSource:0}: Error finding container 614c5e973640c8c64ce231d7258d1a2961e7041cae205b318ae147e428560bf4: Status 404 returned error can't find the container with id 614c5e973640c8c64ce231d7258d1a2961e7041cae205b318ae147e428560bf4 Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.532033 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z8g9w"] Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.611706 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-97md2"] Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.613226 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.624967 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-97md2"] Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.729219 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5ba5-account-create-update-hqqxz"] Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.792581 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.792727 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-config\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.792778 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.792824 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-246v5\" (UniqueName: \"kubernetes.io/projected/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-kube-api-access-246v5\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.792861 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.792886 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.821600 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lskk6"] Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.837050 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-879t5"] Dec 02 14:03:15 crc kubenswrapper[4900]: W1202 14:03:15.854704 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod509f62ef_848d_46b5_8272_1e94429353cb.slice/crio-b58e2dc9c37d48c4ccd912a2fb7169ad78a1e53cb26545c24a19bd1f46315fbc WatchSource:0}: Error finding container 
b58e2dc9c37d48c4ccd912a2fb7169ad78a1e53cb26545c24a19bd1f46315fbc: Status 404 returned error can't find the container with id b58e2dc9c37d48c4ccd912a2fb7169ad78a1e53cb26545c24a19bd1f46315fbc Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.894624 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.895718 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-config\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.895839 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.897034 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-246v5\" (UniqueName: \"kubernetes.io/projected/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-kube-api-access-246v5\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.897158 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.897238 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.895660 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.896867 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-config\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.897194 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-97md2\" 
(UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.898123 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.899845 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.925472 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-246v5\" (UniqueName: \"kubernetes.io/projected/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-kube-api-access-246v5\") pod \"dnsmasq-dns-5f59b8f679-97md2\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") " pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:15 crc kubenswrapper[4900]: I1202 14:03:15.972034 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.038941 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lskk6" event={"ID":"8e0fc853-4474-46e7-8669-5c132f629baf","Type":"ContainerStarted","Data":"07938e2354dde55e449381d8d27f7b2da218ba1c08abf53d10770c7e7e705fc8"} Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.043407 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5ba5-account-create-update-hqqxz" event={"ID":"809485b0-485a-437a-93f3-432499b8e2c5","Type":"ContainerStarted","Data":"92f13e7550aa487e57b22868adf1fda3d03112223f067c2a250b1b9d4f6a386d"} Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.052560 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-879t5" event={"ID":"509f62ef-848d-46b5-8272-1e94429353cb","Type":"ContainerStarted","Data":"b58e2dc9c37d48c4ccd912a2fb7169ad78a1e53cb26545c24a19bd1f46315fbc"} Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.055201 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-smkbq"] Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.055320 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zn4n" event={"ID":"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29","Type":"ContainerStarted","Data":"83190c57d144ed6efda2e3201abfd9fa166f38ec80761516c2e6b861cd107366"} Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.055389 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" podUID="e60294be-bae1-40d6-9ff1-a6931b1989e6" containerName="dnsmasq-dns" containerID="cri-o://ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623" gracePeriod=10 Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.055399 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zn4n" event={"ID":"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29","Type":"ContainerStarted","Data":"614c5e973640c8c64ce231d7258d1a2961e7041cae205b318ae147e428560bf4"} Dec 02 14:03:16 crc kubenswrapper[4900]: 
I1202 14:03:16.146451 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-4zn4n" podStartSLOduration=2.146435475 podStartE2EDuration="2.146435475s" podCreationTimestamp="2025-12-02 14:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:16.124391057 +0000 UTC m=+1241.540204908" watchObservedRunningTime="2025-12-02 14:03:16.146435475 +0000 UTC m=+1241.562249326" Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.183840 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4c1f-account-create-update-s546k"] Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.213819 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1777-account-create-update-h5cks"] Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.522561 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-97md2"] Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.859759 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.925247 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-sb\") pod \"e60294be-bae1-40d6-9ff1-a6931b1989e6\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.925795 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mz8m\" (UniqueName: \"kubernetes.io/projected/e60294be-bae1-40d6-9ff1-a6931b1989e6-kube-api-access-7mz8m\") pod \"e60294be-bae1-40d6-9ff1-a6931b1989e6\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.925919 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-svc\") pod \"e60294be-bae1-40d6-9ff1-a6931b1989e6\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.925957 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-swift-storage-0\") pod \"e60294be-bae1-40d6-9ff1-a6931b1989e6\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.926035 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-config\") pod \"e60294be-bae1-40d6-9ff1-a6931b1989e6\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.926090 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-nb\") pod \"e60294be-bae1-40d6-9ff1-a6931b1989e6\" (UID: \"e60294be-bae1-40d6-9ff1-a6931b1989e6\") " Dec 02 14:03:16 crc kubenswrapper[4900]: I1202 14:03:16.961003 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e60294be-bae1-40d6-9ff1-a6931b1989e6-kube-api-access-7mz8m" (OuterVolumeSpecName: "kube-api-access-7mz8m") pod "e60294be-bae1-40d6-9ff1-a6931b1989e6" (UID: "e60294be-bae1-40d6-9ff1-a6931b1989e6"). InnerVolumeSpecName "kube-api-access-7mz8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.027663 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mz8m\" (UniqueName: \"kubernetes.io/projected/e60294be-bae1-40d6-9ff1-a6931b1989e6-kube-api-access-7mz8m\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.074622 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-smkbq" event={"ID":"38f3a381-1653-4c21-929c-86e764024d0c","Type":"ContainerStarted","Data":"773c794ae2691bd21d73c9d6c7fe2fee9e87ee8a6c00ba7eaedf358e1e75300d"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.075490 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e60294be-bae1-40d6-9ff1-a6931b1989e6" (UID: "e60294be-bae1-40d6-9ff1-a6931b1989e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.079521 4900 generic.go:334] "Generic (PLEG): container finished" podID="8e0fc853-4474-46e7-8669-5c132f629baf" containerID="baef9a5f21779e2a4b34b8427e41273cb4427c3539b16cff9e16ec581a7c2e72" exitCode=0 Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.079954 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lskk6" event={"ID":"8e0fc853-4474-46e7-8669-5c132f629baf","Type":"ContainerDied","Data":"baef9a5f21779e2a4b34b8427e41273cb4427c3539b16cff9e16ec581a7c2e72"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.081874 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-config" (OuterVolumeSpecName: "config") pod "e60294be-bae1-40d6-9ff1-a6931b1989e6" (UID: "e60294be-bae1-40d6-9ff1-a6931b1989e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.084404 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e60294be-bae1-40d6-9ff1-a6931b1989e6" (UID: "e60294be-bae1-40d6-9ff1-a6931b1989e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.084878 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e60294be-bae1-40d6-9ff1-a6931b1989e6" (UID: "e60294be-bae1-40d6-9ff1-a6931b1989e6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.088993 4900 generic.go:334] "Generic (PLEG): container finished" podID="809485b0-485a-437a-93f3-432499b8e2c5" containerID="82fed4eee43ecffe9601676c7729bfdbd701095eacc9481a8ceb34fd4b3b0c5d" exitCode=0 Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.089108 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5ba5-account-create-update-hqqxz" event={"ID":"809485b0-485a-437a-93f3-432499b8e2c5","Type":"ContainerDied","Data":"82fed4eee43ecffe9601676c7729bfdbd701095eacc9481a8ceb34fd4b3b0c5d"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.093609 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" event={"ID":"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14","Type":"ContainerStarted","Data":"c245523cafc9a58338623bb6f0c0ab7108b3a44c2890901b8630fad99bea656c"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.097768 4900 generic.go:334] "Generic (PLEG): container finished" podID="509f62ef-848d-46b5-8272-1e94429353cb" containerID="12ecfd9dec64506cfb20c1aa9db5f5e504ac20447db837d5296f0bc7d6ba2db1" exitCode=0 Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.098065 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-879t5" event={"ID":"509f62ef-848d-46b5-8272-1e94429353cb","Type":"ContainerDied","Data":"12ecfd9dec64506cfb20c1aa9db5f5e504ac20447db837d5296f0bc7d6ba2db1"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.098167 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e60294be-bae1-40d6-9ff1-a6931b1989e6" (UID: "e60294be-bae1-40d6-9ff1-a6931b1989e6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.099969 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4c1f-account-create-update-s546k" event={"ID":"213ca91f-e63f-4f0e-a161-57f4cb101c0f","Type":"ContainerStarted","Data":"228abee1524ca28344e96e640d2ce55eb6b955c5f7389ea75a0f70333d613ab6"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.100003 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4c1f-account-create-update-s546k" event={"ID":"213ca91f-e63f-4f0e-a161-57f4cb101c0f","Type":"ContainerStarted","Data":"40bf2f71dd2074473d6c6fac236095648ad5e4f085fe669773f9d9a3f04b99bb"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.101413 4900 generic.go:334] "Generic (PLEG): container finished" podID="e60294be-bae1-40d6-9ff1-a6931b1989e6" containerID="ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623" exitCode=0 Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.101522 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" event={"ID":"e60294be-bae1-40d6-9ff1-a6931b1989e6","Type":"ContainerDied","Data":"ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.101585 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.101784 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-z8g9w" event={"ID":"e60294be-bae1-40d6-9ff1-a6931b1989e6","Type":"ContainerDied","Data":"5665d2d51832ff98763d1ed62bb4311713731e6e3a611987490ebd569228e97f"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.101807 4900 scope.go:117] "RemoveContainer" containerID="ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.102828 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1777-account-create-update-h5cks" event={"ID":"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729","Type":"ContainerStarted","Data":"f3e31a0037c3864d8fbc36b0641f6e02b3881334b51f070abf36139382054733"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.102986 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1777-account-create-update-h5cks" event={"ID":"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729","Type":"ContainerStarted","Data":"1c3bb2b20045e9835e9ae68889203eb61913144b8f7ae16557f6cdb3e5e9687b"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.107237 4900 generic.go:334] "Generic (PLEG): container finished" podID="b0c9e2e8-fe2e-45b4-b7ad-5c574139db29" containerID="83190c57d144ed6efda2e3201abfd9fa166f38ec80761516c2e6b861cd107366" exitCode=0 Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.107449 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zn4n" event={"ID":"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29","Type":"ContainerDied","Data":"83190c57d144ed6efda2e3201abfd9fa166f38ec80761516c2e6b861cd107366"} Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.131298 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.131539 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.131668 4900 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.131757 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.131859 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e60294be-bae1-40d6-9ff1-a6931b1989e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.149191 4900 scope.go:117] "RemoveContainer" containerID="30c333b827d7b114b00211ab97d3d6daf8d1fcb1d9c77e462a88f640fd565582" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.192863 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-1777-account-create-update-h5cks" podStartSLOduration=3.192829951 podStartE2EDuration="3.192829951s" 
podCreationTimestamp="2025-12-02 14:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:17.186617487 +0000 UTC m=+1242.602431338" watchObservedRunningTime="2025-12-02 14:03:17.192829951 +0000 UTC m=+1242.608643802" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.216762 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-4c1f-account-create-update-s546k" podStartSLOduration=3.2167410419999998 podStartE2EDuration="3.216741042s" podCreationTimestamp="2025-12-02 14:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:17.20778252 +0000 UTC m=+1242.623596371" watchObservedRunningTime="2025-12-02 14:03:17.216741042 +0000 UTC m=+1242.632554893" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.304327 4900 scope.go:117] "RemoveContainer" containerID="ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623" Dec 02 14:03:17 crc kubenswrapper[4900]: E1202 14:03:17.304966 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623\": container with ID starting with ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623 not found: ID does not exist" containerID="ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.305010 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623"} err="failed to get container status \"ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623\": rpc error: code = NotFound desc = could not find container \"ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623\": container with ID starting with ecc5af167c3ebc9fe949329edbffc7cb00f3172da833157890b880abc76df623 not found: ID does not exist" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.305049 4900 scope.go:117] "RemoveContainer" containerID="30c333b827d7b114b00211ab97d3d6daf8d1fcb1d9c77e462a88f640fd565582" Dec 02 14:03:17 crc kubenswrapper[4900]: E1202 14:03:17.315948 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c333b827d7b114b00211ab97d3d6daf8d1fcb1d9c77e462a88f640fd565582\": container with ID starting with 30c333b827d7b114b00211ab97d3d6daf8d1fcb1d9c77e462a88f640fd565582 not found: ID does not exist" containerID="30c333b827d7b114b00211ab97d3d6daf8d1fcb1d9c77e462a88f640fd565582" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.316004 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c333b827d7b114b00211ab97d3d6daf8d1fcb1d9c77e462a88f640fd565582"} err="failed to get container status \"30c333b827d7b114b00211ab97d3d6daf8d1fcb1d9c77e462a88f640fd565582\": rpc error: code = NotFound desc = could not find container \"30c333b827d7b114b00211ab97d3d6daf8d1fcb1d9c77e462a88f640fd565582\": container with ID starting with 30c333b827d7b114b00211ab97d3d6daf8d1fcb1d9c77e462a88f640fd565582 not found: ID does not exist" Dec 02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.315954 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z8g9w"] Dec 
02 14:03:17 crc kubenswrapper[4900]: I1202 14:03:17.327896 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-z8g9w"] Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.117127 4900 generic.go:334] "Generic (PLEG): container finished" podID="8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" containerID="f6f49c300a7a4ebceb502106fd1d9f7bd28e4e090c1d8f2e3a1ee796369a40a5" exitCode=0 Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.117196 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" event={"ID":"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14","Type":"ContainerDied","Data":"f6f49c300a7a4ebceb502106fd1d9f7bd28e4e090c1d8f2e3a1ee796369a40a5"} Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.119737 4900 generic.go:334] "Generic (PLEG): container finished" podID="afb303d7-3e16-4b92-b0b0-d0ce4b6ca729" containerID="f3e31a0037c3864d8fbc36b0641f6e02b3881334b51f070abf36139382054733" exitCode=0 Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.119777 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1777-account-create-update-h5cks" event={"ID":"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729","Type":"ContainerDied","Data":"f3e31a0037c3864d8fbc36b0641f6e02b3881334b51f070abf36139382054733"} Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.147206 4900 generic.go:334] "Generic (PLEG): container finished" podID="213ca91f-e63f-4f0e-a161-57f4cb101c0f" containerID="228abee1524ca28344e96e640d2ce55eb6b955c5f7389ea75a0f70333d613ab6" exitCode=0 Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.147577 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4c1f-account-create-update-s546k" event={"ID":"213ca91f-e63f-4f0e-a161-57f4cb101c0f","Type":"ContainerDied","Data":"228abee1524ca28344e96e640d2ce55eb6b955c5f7389ea75a0f70333d613ab6"} Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.493482 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lskk6" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.557417 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e0fc853-4474-46e7-8669-5c132f629baf-operator-scripts\") pod \"8e0fc853-4474-46e7-8669-5c132f629baf\" (UID: \"8e0fc853-4474-46e7-8669-5c132f629baf\") " Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.557492 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw8x7\" (UniqueName: \"kubernetes.io/projected/8e0fc853-4474-46e7-8669-5c132f629baf-kube-api-access-bw8x7\") pod \"8e0fc853-4474-46e7-8669-5c132f629baf\" (UID: \"8e0fc853-4474-46e7-8669-5c132f629baf\") " Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.560723 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e0fc853-4474-46e7-8669-5c132f629baf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e0fc853-4474-46e7-8669-5c132f629baf" (UID: "8e0fc853-4474-46e7-8669-5c132f629baf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.564142 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e0fc853-4474-46e7-8669-5c132f629baf-kube-api-access-bw8x7" (OuterVolumeSpecName: "kube-api-access-bw8x7") pod "8e0fc853-4474-46e7-8669-5c132f629baf" (UID: "8e0fc853-4474-46e7-8669-5c132f629baf"). InnerVolumeSpecName "kube-api-access-bw8x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.663820 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e0fc853-4474-46e7-8669-5c132f629baf-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.664256 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw8x7\" (UniqueName: \"kubernetes.io/projected/8e0fc853-4474-46e7-8669-5c132f629baf-kube-api-access-bw8x7\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.705112 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-879t5" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.721583 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5ba5-account-create-update-hqqxz" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.731252 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zn4n" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.765245 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/809485b0-485a-437a-93f3-432499b8e2c5-operator-scripts\") pod \"809485b0-485a-437a-93f3-432499b8e2c5\" (UID: \"809485b0-485a-437a-93f3-432499b8e2c5\") " Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.765303 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb87m\" (UniqueName: \"kubernetes.io/projected/509f62ef-848d-46b5-8272-1e94429353cb-kube-api-access-jb87m\") pod \"509f62ef-848d-46b5-8272-1e94429353cb\" (UID: \"509f62ef-848d-46b5-8272-1e94429353cb\") " Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.765346 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jsc4\" (UniqueName: \"kubernetes.io/projected/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-kube-api-access-8jsc4\") pod \"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29\" (UID: \"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29\") " Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.765383 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-operator-scripts\") pod \"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29\" (UID: \"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29\") " Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.765403 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509f62ef-848d-46b5-8272-1e94429353cb-operator-scripts\") pod \"509f62ef-848d-46b5-8272-1e94429353cb\" (UID: \"509f62ef-848d-46b5-8272-1e94429353cb\") " Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.765451 4900 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-59trw\" (UniqueName: \"kubernetes.io/projected/809485b0-485a-437a-93f3-432499b8e2c5-kube-api-access-59trw\") pod \"809485b0-485a-437a-93f3-432499b8e2c5\" (UID: \"809485b0-485a-437a-93f3-432499b8e2c5\") " Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.766508 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809485b0-485a-437a-93f3-432499b8e2c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "809485b0-485a-437a-93f3-432499b8e2c5" (UID: "809485b0-485a-437a-93f3-432499b8e2c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.767076 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0c9e2e8-fe2e-45b4-b7ad-5c574139db29" (UID: "b0c9e2e8-fe2e-45b4-b7ad-5c574139db29"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.767108 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/509f62ef-848d-46b5-8272-1e94429353cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "509f62ef-848d-46b5-8272-1e94429353cb" (UID: "509f62ef-848d-46b5-8272-1e94429353cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.770667 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-kube-api-access-8jsc4" (OuterVolumeSpecName: "kube-api-access-8jsc4") pod "b0c9e2e8-fe2e-45b4-b7ad-5c574139db29" (UID: "b0c9e2e8-fe2e-45b4-b7ad-5c574139db29"). InnerVolumeSpecName "kube-api-access-8jsc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.770725 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809485b0-485a-437a-93f3-432499b8e2c5-kube-api-access-59trw" (OuterVolumeSpecName: "kube-api-access-59trw") pod "809485b0-485a-437a-93f3-432499b8e2c5" (UID: "809485b0-485a-437a-93f3-432499b8e2c5"). InnerVolumeSpecName "kube-api-access-59trw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.778410 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509f62ef-848d-46b5-8272-1e94429353cb-kube-api-access-jb87m" (OuterVolumeSpecName: "kube-api-access-jb87m") pod "509f62ef-848d-46b5-8272-1e94429353cb" (UID: "509f62ef-848d-46b5-8272-1e94429353cb"). InnerVolumeSpecName "kube-api-access-jb87m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.870221 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59trw\" (UniqueName: \"kubernetes.io/projected/809485b0-485a-437a-93f3-432499b8e2c5-kube-api-access-59trw\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.870264 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/809485b0-485a-437a-93f3-432499b8e2c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.870277 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb87m\" (UniqueName: \"kubernetes.io/projected/509f62ef-848d-46b5-8272-1e94429353cb-kube-api-access-jb87m\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.870289 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jsc4\" (UniqueName: \"kubernetes.io/projected/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-kube-api-access-8jsc4\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.870299 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.870307 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509f62ef-848d-46b5-8272-1e94429353cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:18 crc kubenswrapper[4900]: I1202 14:03:18.922896 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e60294be-bae1-40d6-9ff1-a6931b1989e6" path="/var/lib/kubelet/pods/e60294be-bae1-40d6-9ff1-a6931b1989e6/volumes" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.159596 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zn4n" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.159694 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zn4n" event={"ID":"b0c9e2e8-fe2e-45b4-b7ad-5c574139db29","Type":"ContainerDied","Data":"614c5e973640c8c64ce231d7258d1a2961e7041cae205b318ae147e428560bf4"} Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.159747 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="614c5e973640c8c64ce231d7258d1a2961e7041cae205b318ae147e428560bf4" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.162539 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lskk6" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.162688 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lskk6" event={"ID":"8e0fc853-4474-46e7-8669-5c132f629baf","Type":"ContainerDied","Data":"07938e2354dde55e449381d8d27f7b2da218ba1c08abf53d10770c7e7e705fc8"} Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.162753 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07938e2354dde55e449381d8d27f7b2da218ba1c08abf53d10770c7e7e705fc8" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.165220 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5ba5-account-create-update-hqqxz" event={"ID":"809485b0-485a-437a-93f3-432499b8e2c5","Type":"ContainerDied","Data":"92f13e7550aa487e57b22868adf1fda3d03112223f067c2a250b1b9d4f6a386d"} Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.165354 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f13e7550aa487e57b22868adf1fda3d03112223f067c2a250b1b9d4f6a386d" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.165242 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5ba5-account-create-update-hqqxz" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.169019 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" event={"ID":"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14","Type":"ContainerStarted","Data":"22a629147b9c6048f5eb1696402fbed4a1fe15570dae11f64ecaa9bdca18840b"} Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.169366 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.171047 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-879t5" event={"ID":"509f62ef-848d-46b5-8272-1e94429353cb","Type":"ContainerDied","Data":"b58e2dc9c37d48c4ccd912a2fb7169ad78a1e53cb26545c24a19bd1f46315fbc"} Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.171174 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b58e2dc9c37d48c4ccd912a2fb7169ad78a1e53cb26545c24a19bd1f46315fbc" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.171086 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-879t5" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.192337 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" podStartSLOduration=4.192217415 podStartE2EDuration="4.192217415s" podCreationTimestamp="2025-12-02 14:03:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:19.189804047 +0000 UTC m=+1244.605617908" watchObservedRunningTime="2025-12-02 14:03:19.192217415 +0000 UTC m=+1244.608031276" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.526065 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1777-account-create-update-h5cks" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.587706 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9d9r\" (UniqueName: \"kubernetes.io/projected/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-kube-api-access-g9d9r\") pod \"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729\" (UID: \"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729\") " Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.587862 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-operator-scripts\") pod \"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729\" (UID: \"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729\") " Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.589156 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afb303d7-3e16-4b92-b0b0-d0ce4b6ca729" (UID: "afb303d7-3e16-4b92-b0b0-d0ce4b6ca729"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.596000 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-kube-api-access-g9d9r" (OuterVolumeSpecName: "kube-api-access-g9d9r") pod "afb303d7-3e16-4b92-b0b0-d0ce4b6ca729" (UID: "afb303d7-3e16-4b92-b0b0-d0ce4b6ca729"). InnerVolumeSpecName "kube-api-access-g9d9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.659832 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4c1f-account-create-update-s546k" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.689456 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9d9r\" (UniqueName: \"kubernetes.io/projected/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-kube-api-access-g9d9r\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.689484 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:19 crc kubenswrapper[4900]: I1202 14:03:19.794152 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/213ca91f-e63f-4f0e-a161-57f4cb101c0f-operator-scripts\") pod \"213ca91f-e63f-4f0e-a161-57f4cb101c0f\" (UID: \"213ca91f-e63f-4f0e-a161-57f4cb101c0f\") " Dec 02 14:03:20 crc kubenswrapper[4900]: I1202 14:03:19.794364 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7swwj\" (UniqueName: \"kubernetes.io/projected/213ca91f-e63f-4f0e-a161-57f4cb101c0f-kube-api-access-7swwj\") pod \"213ca91f-e63f-4f0e-a161-57f4cb101c0f\" (UID: \"213ca91f-e63f-4f0e-a161-57f4cb101c0f\") " Dec 02 14:03:20 crc kubenswrapper[4900]: I1202 14:03:19.795439 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213ca91f-e63f-4f0e-a161-57f4cb101c0f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "213ca91f-e63f-4f0e-a161-57f4cb101c0f" (UID: "213ca91f-e63f-4f0e-a161-57f4cb101c0f"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:20 crc kubenswrapper[4900]: I1202 14:03:19.821275 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213ca91f-e63f-4f0e-a161-57f4cb101c0f-kube-api-access-7swwj" (OuterVolumeSpecName: "kube-api-access-7swwj") pod "213ca91f-e63f-4f0e-a161-57f4cb101c0f" (UID: "213ca91f-e63f-4f0e-a161-57f4cb101c0f"). InnerVolumeSpecName "kube-api-access-7swwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:20 crc kubenswrapper[4900]: I1202 14:03:19.896625 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7swwj\" (UniqueName: \"kubernetes.io/projected/213ca91f-e63f-4f0e-a161-57f4cb101c0f-kube-api-access-7swwj\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:20 crc kubenswrapper[4900]: I1202 14:03:19.896674 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/213ca91f-e63f-4f0e-a161-57f4cb101c0f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:20 crc kubenswrapper[4900]: I1202 14:03:20.200896 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1777-account-create-update-h5cks" Dec 02 14:03:20 crc kubenswrapper[4900]: I1202 14:03:20.200896 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1777-account-create-update-h5cks" event={"ID":"afb303d7-3e16-4b92-b0b0-d0ce4b6ca729","Type":"ContainerDied","Data":"1c3bb2b20045e9835e9ae68889203eb61913144b8f7ae16557f6cdb3e5e9687b"} Dec 02 14:03:20 crc kubenswrapper[4900]: I1202 14:03:20.201834 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c3bb2b20045e9835e9ae68889203eb61913144b8f7ae16557f6cdb3e5e9687b" Dec 02 14:03:20 crc kubenswrapper[4900]: I1202 14:03:20.203610 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4c1f-account-create-update-s546k" Dec 02 14:03:20 crc kubenswrapper[4900]: I1202 14:03:20.203629 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4c1f-account-create-update-s546k" event={"ID":"213ca91f-e63f-4f0e-a161-57f4cb101c0f","Type":"ContainerDied","Data":"40bf2f71dd2074473d6c6fac236095648ad5e4f085fe669773f9d9a3f04b99bb"} Dec 02 14:03:20 crc kubenswrapper[4900]: I1202 14:03:20.203715 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40bf2f71dd2074473d6c6fac236095648ad5e4f085fe669773f9d9a3f04b99bb" Dec 02 14:03:24 crc kubenswrapper[4900]: I1202 14:03:24.251707 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-smkbq" event={"ID":"38f3a381-1653-4c21-929c-86e764024d0c","Type":"ContainerStarted","Data":"db79db95d70a03b981ec7d9c93ec8d397d6d181c35dc0babe9571c80f20a28b3"} Dec 02 14:03:24 crc kubenswrapper[4900]: I1202 14:03:24.276612 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-smkbq" podStartSLOduration=3.048240526 podStartE2EDuration="10.276587227s" podCreationTimestamp="2025-12-02 14:03:14 +0000 UTC" firstStartedPulling="2025-12-02 14:03:16.071891934 +0000 UTC m=+1241.487705785" lastFinishedPulling="2025-12-02 14:03:23.300238625 +0000 UTC m=+1248.716052486" observedRunningTime="2025-12-02 14:03:24.272280456 +0000 UTC m=+1249.688094347" watchObservedRunningTime="2025-12-02 14:03:24.276587227 +0000 UTC m=+1249.692401118" Dec 02 14:03:25 crc kubenswrapper[4900]: I1202 14:03:25.974699 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" Dec 02 14:03:26 crc kubenswrapper[4900]: I1202 14:03:26.042577 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-6rvcj"] Dec 02 14:03:26 crc kubenswrapper[4900]: I1202 14:03:26.042991 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" podUID="a56b8ea0-ea80-4320-927c-14e52f803593" containerName="dnsmasq-dns" containerID="cri-o://cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e" gracePeriod=10 Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.040686 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.191370 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-config\") pod \"a56b8ea0-ea80-4320-927c-14e52f803593\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.191574 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg45d\" (UniqueName: \"kubernetes.io/projected/a56b8ea0-ea80-4320-927c-14e52f803593-kube-api-access-jg45d\") pod \"a56b8ea0-ea80-4320-927c-14e52f803593\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.191625 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-sb\") pod \"a56b8ea0-ea80-4320-927c-14e52f803593\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.191682 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-nb\") pod \"a56b8ea0-ea80-4320-927c-14e52f803593\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.191787 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-dns-svc\") pod \"a56b8ea0-ea80-4320-927c-14e52f803593\" (UID: \"a56b8ea0-ea80-4320-927c-14e52f803593\") " Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.197223 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56b8ea0-ea80-4320-927c-14e52f803593-kube-api-access-jg45d" (OuterVolumeSpecName: "kube-api-access-jg45d") pod "a56b8ea0-ea80-4320-927c-14e52f803593" (UID: "a56b8ea0-ea80-4320-927c-14e52f803593"). InnerVolumeSpecName "kube-api-access-jg45d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.233595 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a56b8ea0-ea80-4320-927c-14e52f803593" (UID: "a56b8ea0-ea80-4320-927c-14e52f803593"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.245276 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-config" (OuterVolumeSpecName: "config") pod "a56b8ea0-ea80-4320-927c-14e52f803593" (UID: "a56b8ea0-ea80-4320-927c-14e52f803593"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.248023 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a56b8ea0-ea80-4320-927c-14e52f803593" (UID: "a56b8ea0-ea80-4320-927c-14e52f803593"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.262699 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a56b8ea0-ea80-4320-927c-14e52f803593" (UID: "a56b8ea0-ea80-4320-927c-14e52f803593"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.283612 4900 generic.go:334] "Generic (PLEG): container finished" podID="a56b8ea0-ea80-4320-927c-14e52f803593" containerID="cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e" exitCode=0 Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.283681 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" event={"ID":"a56b8ea0-ea80-4320-927c-14e52f803593","Type":"ContainerDied","Data":"cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e"} Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.283707 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.283723 4900 scope.go:117] "RemoveContainer" containerID="cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.283714 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-6rvcj" event={"ID":"a56b8ea0-ea80-4320-927c-14e52f803593","Type":"ContainerDied","Data":"51449dbe615d367a74c7d0663604c130899b40be6456391f513a737ea73e6e2e"} Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.293990 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.294014 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg45d\" (UniqueName: \"kubernetes.io/projected/a56b8ea0-ea80-4320-927c-14e52f803593-kube-api-access-jg45d\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.294025 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.294034 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.294042 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a56b8ea0-ea80-4320-927c-14e52f803593-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.342442 4900 scope.go:117] "RemoveContainer" containerID="8b35a486d2f9ca3ff334c5ef5f8ab059e4f6b8fa6319de757faaf9587eca4cad" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.343883 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-6rvcj"] Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.351753 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-b8fbc5445-6rvcj"] Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.362761 4900 scope.go:117] "RemoveContainer" containerID="cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e" Dec 02 14:03:27 crc kubenswrapper[4900]: E1202 14:03:27.365783 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e\": container with ID starting with cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e not found: ID does not exist" containerID="cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.365854 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e"} err="failed to get container status \"cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e\": rpc error: code = NotFound desc = could not find container \"cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e\": container with ID starting with cc025ab7f65e01c92fa7725566d0fa6aebf80407506eb42903afd359fe7fb43e not found: ID does not exist" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.365908 4900 scope.go:117] "RemoveContainer" containerID="8b35a486d2f9ca3ff334c5ef5f8ab059e4f6b8fa6319de757faaf9587eca4cad" Dec 02 14:03:27 crc kubenswrapper[4900]: E1202 14:03:27.366231 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b35a486d2f9ca3ff334c5ef5f8ab059e4f6b8fa6319de757faaf9587eca4cad\": container with ID starting with 8b35a486d2f9ca3ff334c5ef5f8ab059e4f6b8fa6319de757faaf9587eca4cad not found: ID does not exist" containerID="8b35a486d2f9ca3ff334c5ef5f8ab059e4f6b8fa6319de757faaf9587eca4cad" Dec 02 14:03:27 crc kubenswrapper[4900]: I1202 14:03:27.366263 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b35a486d2f9ca3ff334c5ef5f8ab059e4f6b8fa6319de757faaf9587eca4cad"} err="failed to get container status \"8b35a486d2f9ca3ff334c5ef5f8ab059e4f6b8fa6319de757faaf9587eca4cad\": rpc error: code = NotFound desc = could not find container \"8b35a486d2f9ca3ff334c5ef5f8ab059e4f6b8fa6319de757faaf9587eca4cad\": container with ID starting with 8b35a486d2f9ca3ff334c5ef5f8ab059e4f6b8fa6319de757faaf9587eca4cad not found: ID does not exist" Dec 02 14:03:28 crc kubenswrapper[4900]: I1202 14:03:28.299940 4900 generic.go:334] "Generic (PLEG): container finished" podID="38f3a381-1653-4c21-929c-86e764024d0c" containerID="db79db95d70a03b981ec7d9c93ec8d397d6d181c35dc0babe9571c80f20a28b3" exitCode=0 Dec 02 14:03:28 crc kubenswrapper[4900]: I1202 14:03:28.300180 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-smkbq" event={"ID":"38f3a381-1653-4c21-929c-86e764024d0c","Type":"ContainerDied","Data":"db79db95d70a03b981ec7d9c93ec8d397d6d181c35dc0babe9571c80f20a28b3"} Dec 02 14:03:28 crc kubenswrapper[4900]: I1202 14:03:28.921576 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a56b8ea0-ea80-4320-927c-14e52f803593" path="/var/lib/kubelet/pods/a56b8ea0-ea80-4320-927c-14e52f803593/volumes" Dec 02 14:03:29 crc kubenswrapper[4900]: I1202 14:03:29.763388 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:29 crc kubenswrapper[4900]: I1202 14:03:29.852147 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-combined-ca-bundle\") pod \"38f3a381-1653-4c21-929c-86e764024d0c\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " Dec 02 14:03:29 crc kubenswrapper[4900]: I1202 14:03:29.852257 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-config-data\") pod \"38f3a381-1653-4c21-929c-86e764024d0c\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " Dec 02 14:03:29 crc kubenswrapper[4900]: I1202 14:03:29.852357 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms99f\" (UniqueName: \"kubernetes.io/projected/38f3a381-1653-4c21-929c-86e764024d0c-kube-api-access-ms99f\") pod \"38f3a381-1653-4c21-929c-86e764024d0c\" (UID: \"38f3a381-1653-4c21-929c-86e764024d0c\") " Dec 02 14:03:29 crc kubenswrapper[4900]: I1202 14:03:29.861113 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f3a381-1653-4c21-929c-86e764024d0c-kube-api-access-ms99f" (OuterVolumeSpecName: "kube-api-access-ms99f") pod "38f3a381-1653-4c21-929c-86e764024d0c" (UID: "38f3a381-1653-4c21-929c-86e764024d0c"). InnerVolumeSpecName "kube-api-access-ms99f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:29 crc kubenswrapper[4900]: I1202 14:03:29.896269 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38f3a381-1653-4c21-929c-86e764024d0c" (UID: "38f3a381-1653-4c21-929c-86e764024d0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:29 crc kubenswrapper[4900]: I1202 14:03:29.945596 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-config-data" (OuterVolumeSpecName: "config-data") pod "38f3a381-1653-4c21-929c-86e764024d0c" (UID: "38f3a381-1653-4c21-929c-86e764024d0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:29 crc kubenswrapper[4900]: I1202 14:03:29.962290 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:29 crc kubenswrapper[4900]: I1202 14:03:29.963388 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f3a381-1653-4c21-929c-86e764024d0c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:29 crc kubenswrapper[4900]: I1202 14:03:29.963404 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms99f\" (UniqueName: \"kubernetes.io/projected/38f3a381-1653-4c21-929c-86e764024d0c-kube-api-access-ms99f\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.325973 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-smkbq" event={"ID":"38f3a381-1653-4c21-929c-86e764024d0c","Type":"ContainerDied","Data":"773c794ae2691bd21d73c9d6c7fe2fee9e87ee8a6c00ba7eaedf358e1e75300d"} Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.326017 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="773c794ae2691bd21d73c9d6c7fe2fee9e87ee8a6c00ba7eaedf358e1e75300d" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.326101 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-smkbq" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594270 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6kgbc"] Dec 02 14:03:30 crc kubenswrapper[4900]: E1202 14:03:30.594578 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60294be-bae1-40d6-9ff1-a6931b1989e6" containerName="dnsmasq-dns" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594593 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60294be-bae1-40d6-9ff1-a6931b1989e6" containerName="dnsmasq-dns" Dec 02 14:03:30 crc kubenswrapper[4900]: E1202 14:03:30.594609 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60294be-bae1-40d6-9ff1-a6931b1989e6" containerName="init" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594615 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60294be-bae1-40d6-9ff1-a6931b1989e6" containerName="init" Dec 02 14:03:30 crc kubenswrapper[4900]: E1202 14:03:30.594624 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c9e2e8-fe2e-45b4-b7ad-5c574139db29" containerName="mariadb-database-create" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594631 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c9e2e8-fe2e-45b4-b7ad-5c574139db29" containerName="mariadb-database-create" Dec 02 14:03:30 crc kubenswrapper[4900]: E1202 14:03:30.594660 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="509f62ef-848d-46b5-8272-1e94429353cb" containerName="mariadb-database-create" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594666 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="509f62ef-848d-46b5-8272-1e94429353cb" containerName="mariadb-database-create" Dec 02 14:03:30 crc kubenswrapper[4900]: E1202 14:03:30.594680 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56b8ea0-ea80-4320-927c-14e52f803593" containerName="init" Dec 02 14:03:30 crc 
kubenswrapper[4900]: I1202 14:03:30.594686 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56b8ea0-ea80-4320-927c-14e52f803593" containerName="init" Dec 02 14:03:30 crc kubenswrapper[4900]: E1202 14:03:30.594707 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0fc853-4474-46e7-8669-5c132f629baf" containerName="mariadb-database-create" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594714 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0fc853-4474-46e7-8669-5c132f629baf" containerName="mariadb-database-create" Dec 02 14:03:30 crc kubenswrapper[4900]: E1202 14:03:30.594728 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213ca91f-e63f-4f0e-a161-57f4cb101c0f" containerName="mariadb-account-create-update" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594737 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="213ca91f-e63f-4f0e-a161-57f4cb101c0f" containerName="mariadb-account-create-update" Dec 02 14:03:30 crc kubenswrapper[4900]: E1202 14:03:30.594748 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb303d7-3e16-4b92-b0b0-d0ce4b6ca729" containerName="mariadb-account-create-update" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594756 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb303d7-3e16-4b92-b0b0-d0ce4b6ca729" containerName="mariadb-account-create-update" Dec 02 14:03:30 crc kubenswrapper[4900]: E1202 14:03:30.594763 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f3a381-1653-4c21-929c-86e764024d0c" containerName="keystone-db-sync" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594770 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f3a381-1653-4c21-929c-86e764024d0c" containerName="keystone-db-sync" Dec 02 14:03:30 crc kubenswrapper[4900]: E1202 14:03:30.594780 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809485b0-485a-437a-93f3-432499b8e2c5" containerName="mariadb-account-create-update" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594788 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="809485b0-485a-437a-93f3-432499b8e2c5" containerName="mariadb-account-create-update" Dec 02 14:03:30 crc kubenswrapper[4900]: E1202 14:03:30.594805 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56b8ea0-ea80-4320-927c-14e52f803593" containerName="dnsmasq-dns" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594813 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56b8ea0-ea80-4320-927c-14e52f803593" containerName="dnsmasq-dns" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594968 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56b8ea0-ea80-4320-927c-14e52f803593" containerName="dnsmasq-dns" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594982 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="213ca91f-e63f-4f0e-a161-57f4cb101c0f" containerName="mariadb-account-create-update" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594991 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f3a381-1653-4c21-929c-86e764024d0c" containerName="keystone-db-sync" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.594999 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e0fc853-4474-46e7-8669-5c132f629baf" containerName="mariadb-database-create" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.595011 4900 
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.595022 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60294be-bae1-40d6-9ff1-a6931b1989e6" containerName="dnsmasq-dns"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.595034 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="509f62ef-848d-46b5-8272-1e94429353cb" containerName="mariadb-database-create"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.595046 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="809485b0-485a-437a-93f3-432499b8e2c5" containerName="mariadb-account-create-update"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.595053 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb303d7-3e16-4b92-b0b0-d0ce4b6ca729" containerName="mariadb-account-create-update"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.595575 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6kgbc"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.602898 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.602936 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x8ncb"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.603046 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.603156 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.603658 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.610844 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-xzwhw"]
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.612547 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.634701 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6kgbc"]
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.642445 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-xzwhw"]
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.773988 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-config-data\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.774041 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-combined-ca-bundle\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.774071 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-config\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.774749 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.774832 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.774875 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.774930 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.774961 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld9tf\" (UniqueName: \"kubernetes.io/projected/49e6d834-fae4-4961-8866-9f49914bd21d-kube-api-access-ld9tf\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.775093 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-credential-keys\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.775121 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-fernet-keys\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.775147 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgqgx\" (UniqueName: \"kubernetes.io/projected/c27acb21-bfee-4e37-9f07-38bf334a5b5c-kube-api-access-fgqgx\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.775190 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-scripts\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.778415 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-b9s52"]
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.779413 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b9s52"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.783579 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.783758 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.783850 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-psqf2"
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.793816 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6fhqb"]
Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.794813 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6fhqb"
Need to start a new one" pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.796436 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.796912 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-g5mcs" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.797138 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.845717 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6fhqb"] Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879487 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd24b5dd-8bba-467d-977a-cbd11c05e52b-etc-machine-id\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879535 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-combined-ca-bundle\") pod \"neutron-db-sync-6fhqb\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879569 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-config-data\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879591 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-combined-ca-bundle\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879617 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-config\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879662 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-config\") pod \"neutron-db-sync-6fhqb\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879684 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879711 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-db-sync-config-data\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879734 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879761 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879785 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879802 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n67sg\" (UniqueName: \"kubernetes.io/projected/fd24b5dd-8bba-467d-977a-cbd11c05e52b-kube-api-access-n67sg\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879833 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld9tf\" (UniqueName: \"kubernetes.io/projected/49e6d834-fae4-4961-8866-9f49914bd21d-kube-api-access-ld9tf\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879852 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkx4v\" (UniqueName: \"kubernetes.io/projected/9f87d7bb-973b-441c-9bb7-18a6e9532691-kube-api-access-fkx4v\") pod \"neutron-db-sync-6fhqb\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879876 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-combined-ca-bundle\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879957 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-credential-keys\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879976 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-config-data\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.879992 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-fernet-keys\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.880014 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgqgx\" (UniqueName: \"kubernetes.io/projected/c27acb21-bfee-4e37-9f07-38bf334a5b5c-kube-api-access-fgqgx\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.880036 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-scripts\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.880059 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-scripts\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.880814 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.881377 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-config\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.888683 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.889810 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.890339 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: 
\"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.932193 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-credential-keys\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.937190 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-config-data\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.937693 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgqgx\" (UniqueName: \"kubernetes.io/projected/c27acb21-bfee-4e37-9f07-38bf334a5b5c-kube-api-access-fgqgx\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.938284 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-fernet-keys\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.953586 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld9tf\" (UniqueName: \"kubernetes.io/projected/49e6d834-fae4-4961-8866-9f49914bd21d-kube-api-access-ld9tf\") pod \"dnsmasq-dns-bbf5cc879-xzwhw\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.954080 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-combined-ca-bundle\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.963605 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-scripts\") pod \"keystone-bootstrap-6kgbc\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.964920 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b9s52"] Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.980813 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-combined-ca-bundle\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.980889 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-config-data\") pod \"cinder-db-sync-b9s52\" (UID: 
\"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.980912 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-scripts\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.980951 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd24b5dd-8bba-467d-977a-cbd11c05e52b-etc-machine-id\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.980969 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-combined-ca-bundle\") pod \"neutron-db-sync-6fhqb\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.981036 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-config\") pod \"neutron-db-sync-6fhqb\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.981064 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-db-sync-config-data\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.981092 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n67sg\" (UniqueName: \"kubernetes.io/projected/fd24b5dd-8bba-467d-977a-cbd11c05e52b-kube-api-access-n67sg\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.981111 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkx4v\" (UniqueName: \"kubernetes.io/projected/9f87d7bb-973b-441c-9bb7-18a6e9532691-kube-api-access-fkx4v\") pod \"neutron-db-sync-6fhqb\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.986435 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd24b5dd-8bba-467d-977a-cbd11c05e52b-etc-machine-id\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.996787 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-config\") pod \"neutron-db-sync-6fhqb\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.998856 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-combined-ca-bundle\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:30 crc kubenswrapper[4900]: I1202 14:03:30.999024 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-combined-ca-bundle\") pod \"neutron-db-sync-6fhqb\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.002134 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-db-sync-config-data\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.010193 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-scripts\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.010710 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkx4v\" (UniqueName: \"kubernetes.io/projected/9f87d7bb-973b-441c-9bb7-18a6e9532691-kube-api-access-fkx4v\") pod \"neutron-db-sync-6fhqb\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.023026 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7h46p"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.024654 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7h46p" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.033084 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-config-data\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.038299 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n67sg\" (UniqueName: \"kubernetes.io/projected/fd24b5dd-8bba-467d-977a-cbd11c05e52b-kube-api-access-n67sg\") pod \"cinder-db-sync-b9s52\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.038880 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dj9wt" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.039085 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.054318 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7h46p"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.076893 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.079172 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.083169 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4nw8\" (UniqueName: \"kubernetes.io/projected/433a8e08-2261-425b-97d1-2b61ad9ae5f9-kube-api-access-g4nw8\") pod \"barbican-db-sync-7h46p\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " pod="openstack/barbican-db-sync-7h46p" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.083209 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-combined-ca-bundle\") pod \"barbican-db-sync-7h46p\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " pod="openstack/barbican-db-sync-7h46p" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.083226 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-run-httpd\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.084395 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-config-data\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.084426 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.084533 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-db-sync-config-data\") pod \"barbican-db-sync-7h46p\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " pod="openstack/barbican-db-sync-7h46p" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.085135 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.085159 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.086264 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-scripts\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.086325 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-log-httpd\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.086425 4900 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v4r4\" (UniqueName: \"kubernetes.io/projected/7805c9b7-1be2-499f-b3c9-939245983c97-kube-api-access-7v4r4\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.086623 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.088623 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.096063 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b9s52" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.105571 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-4dw9x"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.119374 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.120356 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.126974 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.127056 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2ps98" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.129788 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.134007 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4dw9x"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.148473 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-xzwhw"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.149204 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.156244 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zzsgk"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.157810 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.167191 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zzsgk"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.187535 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-scripts\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.187582 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.187602 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-log-httpd\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.187627 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.187815 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v4r4\" (UniqueName: \"kubernetes.io/projected/7805c9b7-1be2-499f-b3c9-939245983c97-kube-api-access-7v4r4\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.187848 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-config-data\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.187887 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.187916 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fppn\" (UniqueName: \"kubernetes.io/projected/725f3563-28dc-40f8-b01e-ecc75598997d-kube-api-access-8fppn\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.187963 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66jj2\" (UniqueName: 
\"kubernetes.io/projected/95452ca6-e25a-44d1-a666-eb99c921ae7c-kube-api-access-66jj2\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.187989 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4nw8\" (UniqueName: \"kubernetes.io/projected/433a8e08-2261-425b-97d1-2b61ad9ae5f9-kube-api-access-g4nw8\") pod \"barbican-db-sync-7h46p\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " pod="openstack/barbican-db-sync-7h46p" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.188014 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-scripts\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.188133 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-combined-ca-bundle\") pod \"barbican-db-sync-7h46p\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " pod="openstack/barbican-db-sync-7h46p" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.189245 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-run-httpd\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.189621 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-run-httpd\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.189761 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-config\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.189796 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-config-data\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.189835 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.189862 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.189967 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-combined-ca-bundle\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.189996 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.190036 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95452ca6-e25a-44d1-a666-eb99c921ae7c-logs\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.190413 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-db-sync-config-data\") pod \"barbican-db-sync-7h46p\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " pod="openstack/barbican-db-sync-7h46p" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.190967 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-log-httpd\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.195153 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-scripts\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.197140 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-config-data\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.200467 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.201191 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.201761 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-combined-ca-bundle\") pod \"barbican-db-sync-7h46p\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " pod="openstack/barbican-db-sync-7h46p" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.202587 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-db-sync-config-data\") pod \"barbican-db-sync-7h46p\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " pod="openstack/barbican-db-sync-7h46p" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.206531 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v4r4\" (UniqueName: \"kubernetes.io/projected/7805c9b7-1be2-499f-b3c9-939245983c97-kube-api-access-7v4r4\") pod \"ceilometer-0\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.207373 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4nw8\" (UniqueName: \"kubernetes.io/projected/433a8e08-2261-425b-97d1-2b61ad9ae5f9-kube-api-access-g4nw8\") pod \"barbican-db-sync-7h46p\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " pod="openstack/barbican-db-sync-7h46p" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.217126 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.293291 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.294965 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.295010 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-config-data\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.295044 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fppn\" (UniqueName: \"kubernetes.io/projected/725f3563-28dc-40f8-b01e-ecc75598997d-kube-api-access-8fppn\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.295063 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66jj2\" (UniqueName: \"kubernetes.io/projected/95452ca6-e25a-44d1-a666-eb99c921ae7c-kube-api-access-66jj2\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.295089 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-scripts\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.295119 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-config\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.295147 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.295185 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-combined-ca-bundle\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.295202 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.295223 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95452ca6-e25a-44d1-a666-eb99c921ae7c-logs\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.295489 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95452ca6-e25a-44d1-a666-eb99c921ae7c-logs\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.294891 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.296052 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.298053 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-swift-storage-0\") pod 
\"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.298582 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.299675 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-config\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.300635 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-combined-ca-bundle\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.304029 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-scripts\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.304073 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-config-data\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.314818 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fppn\" (UniqueName: \"kubernetes.io/projected/725f3563-28dc-40f8-b01e-ecc75598997d-kube-api-access-8fppn\") pod \"dnsmasq-dns-56df8fb6b7-zzsgk\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.315308 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66jj2\" (UniqueName: \"kubernetes.io/projected/95452ca6-e25a-44d1-a666-eb99c921ae7c-kube-api-access-66jj2\") pod \"placement-db-sync-4dw9x\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.373464 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7h46p" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.436245 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.444631 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4dw9x" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.485844 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.643361 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6fhqb"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.653495 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b9s52"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.679581 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.684700 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.688577 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.688891 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.689120 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.689412 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7hphq" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.703292 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.703335 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.703364 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-logs\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.703422 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-config-data\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.703468 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.703487 4900 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.703561 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-scripts\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.703582 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrprj\" (UniqueName: \"kubernetes.io/projected/5569b9cf-2f57-4aac-a343-95348500e0a3-kube-api-access-zrprj\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.710162 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.804727 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-config-data\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.804805 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.804822 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.804865 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-scripts\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.804880 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrprj\" (UniqueName: \"kubernetes.io/projected/5569b9cf-2f57-4aac-a343-95348500e0a3-kube-api-access-zrprj\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.804906 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " 
pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.804921 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.804949 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-logs\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.805372 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-logs\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.809857 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.811774 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.815635 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-scripts\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.817388 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.820800 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-config-data\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.825000 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.830351 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrprj\" (UniqueName: \"kubernetes.io/projected/5569b9cf-2f57-4aac-a343-95348500e0a3-kube-api-access-zrprj\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.841507 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6kgbc"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.853358 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.854830 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.856941 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.857344 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.861626 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.871475 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-xzwhw"] Dec 02 14:03:31 crc kubenswrapper[4900]: I1202 14:03:31.881866 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.008072 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.008130 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.008174 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-logs\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.008195 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k8br\" (UniqueName: \"kubernetes.io/projected/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-kube-api-access-7k8br\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: 
I1202 14:03:32.008238 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.008294 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.008314 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.008349 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.051113 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.090401 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.110054 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.110119 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.110167 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.110244 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.110276 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.110317 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-logs\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.110345 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k8br\" (UniqueName: \"kubernetes.io/projected/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-kube-api-access-7k8br\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.110410 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.110728 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.112172 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.112462 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-logs\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.114247 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.115136 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.118211 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.120523 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.157930 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k8br\" (UniqueName: \"kubernetes.io/projected/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-kube-api-access-7k8br\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.158870 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.162538 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7h46p"] Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.186537 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4dw9x"] Dec 02 14:03:32 crc kubenswrapper[4900]: W1202 14:03:32.282822 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7805c9b7_1be2_499f_b3c9_939245983c97.slice/crio-32e17f7ee317c07a8c4b2696adddd3ee61ba47d2c3bd4ceb1b0d5f2891900d79 WatchSource:0}: Error finding container 32e17f7ee317c07a8c4b2696adddd3ee61ba47d2c3bd4ceb1b0d5f2891900d79: Status 404 returned error can't find the container with id 32e17f7ee317c07a8c4b2696adddd3ee61ba47d2c3bd4ceb1b0d5f2891900d79 Dec 02 14:03:32 crc kubenswrapper[4900]: W1202 14:03:32.286237 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49e6d834_fae4_4961_8866_9f49914bd21d.slice/crio-32beecc6e844392a1f1845fd14f7d6099dbcac1774ff504fb7fa6d6c56f23162 WatchSource:0}: Error finding container 32beecc6e844392a1f1845fd14f7d6099dbcac1774ff504fb7fa6d6c56f23162: Status 404 returned error can't find the container with id 32beecc6e844392a1f1845fd14f7d6099dbcac1774ff504fb7fa6d6c56f23162 Dec 02 14:03:32 crc kubenswrapper[4900]: W1202 14:03:32.319593 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc27acb21_bfee_4e37_9f07_38bf334a5b5c.slice/crio-5cf63820ce0772d05fce1679a96b997d4b8411560da201a331ca7fe489abd834 WatchSource:0}: Error finding container 5cf63820ce0772d05fce1679a96b997d4b8411560da201a331ca7fe489abd834: Status 404 returned error can't find the container with id 5cf63820ce0772d05fce1679a96b997d4b8411560da201a331ca7fe489abd834 Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.331624 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.359523 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7h46p" event={"ID":"433a8e08-2261-425b-97d1-2b61ad9ae5f9","Type":"ContainerStarted","Data":"8942b2a6a2e553ddd24e70526eaf93d7633f0ba91b0c6e7e9eb7877dfcbdfcde"} Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.366128 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" event={"ID":"49e6d834-fae4-4961-8866-9f49914bd21d","Type":"ContainerStarted","Data":"32beecc6e844392a1f1845fd14f7d6099dbcac1774ff504fb7fa6d6c56f23162"} Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.368870 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9s52" event={"ID":"fd24b5dd-8bba-467d-977a-cbd11c05e52b","Type":"ContainerStarted","Data":"50fef5cfdc27f02a5487549786d4db70803a9486577316146bccf2aab1a8f823"} Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.376208 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7805c9b7-1be2-499f-b3c9-939245983c97","Type":"ContainerStarted","Data":"32e17f7ee317c07a8c4b2696adddd3ee61ba47d2c3bd4ceb1b0d5f2891900d79"} Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.383656 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6fhqb" event={"ID":"9f87d7bb-973b-441c-9bb7-18a6e9532691","Type":"ContainerStarted","Data":"03c84037f6c8cc223e31635d10947504962b47e90f60ff4eb68e0c6c41040ca5"} Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.391609 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4dw9x" event={"ID":"95452ca6-e25a-44d1-a666-eb99c921ae7c","Type":"ContainerStarted","Data":"00229a74b3ddf43fadcb8f1adfeb39bb8e493eed73e4e20a63725dbcf10bfac6"} Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.392895 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6kgbc" event={"ID":"c27acb21-bfee-4e37-9f07-38bf334a5b5c","Type":"ContainerStarted","Data":"5cf63820ce0772d05fce1679a96b997d4b8411560da201a331ca7fe489abd834"} Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.768105 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zzsgk"] Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.823846 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:03:32 crc kubenswrapper[4900]: W1202 14:03:32.846893 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod725f3563_28dc_40f8_b01e_ecc75598997d.slice/crio-17c1cedd5d4bd074d4849b6e7cd66d0edbe806286f94a09c7cb350301fb92745 WatchSource:0}: Error finding container 17c1cedd5d4bd074d4849b6e7cd66d0edbe806286f94a09c7cb350301fb92745: Status 404 returned error can't find the container with id 17c1cedd5d4bd074d4849b6e7cd66d0edbe806286f94a09c7cb350301fb92745 Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.905530 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.926834 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:03:32 crc kubenswrapper[4900]: I1202 14:03:32.943293 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.403447 4900 generic.go:334] "Generic (PLEG): container finished" podID="49e6d834-fae4-4961-8866-9f49914bd21d" containerID="306ee8b72109ed7ebcb4ffd54b4e341e9537a3c31eb6014e2b3f365e68536ad0" exitCode=0 Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.403535 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" event={"ID":"49e6d834-fae4-4961-8866-9f49914bd21d","Type":"ContainerDied","Data":"306ee8b72109ed7ebcb4ffd54b4e341e9537a3c31eb6014e2b3f365e68536ad0"} Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.408399 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6fhqb" event={"ID":"9f87d7bb-973b-441c-9bb7-18a6e9532691","Type":"ContainerStarted","Data":"73133b2ba4989cd7f78a761541a674e62c5e36785b7fa274c71ca103d32fcf1c"} Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.416753 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5569b9cf-2f57-4aac-a343-95348500e0a3","Type":"ContainerStarted","Data":"efbfd5f5fe276b75f50fd01e2141853ac047a80c346e90a0dfd97038cfc1f5d0"} Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.418985 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6kgbc" event={"ID":"c27acb21-bfee-4e37-9f07-38bf334a5b5c","Type":"ContainerStarted","Data":"6801577ba120fc2235b74ff52e7d832dc6e19cb77eed7e40406ab29bbc2e5f28"} Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.445891 4900 generic.go:334] "Generic (PLEG): container finished" podID="725f3563-28dc-40f8-b01e-ecc75598997d" containerID="36175c4d5dbc391cf9b4bf3670fd6133585a737087ee3e99bd3a77f35d2369b2" exitCode=0 Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.445954 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" event={"ID":"725f3563-28dc-40f8-b01e-ecc75598997d","Type":"ContainerDied","Data":"36175c4d5dbc391cf9b4bf3670fd6133585a737087ee3e99bd3a77f35d2369b2"} Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.445980 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" event={"ID":"725f3563-28dc-40f8-b01e-ecc75598997d","Type":"ContainerStarted","Data":"17c1cedd5d4bd074d4849b6e7cd66d0edbe806286f94a09c7cb350301fb92745"} Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.450013 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6fhqb" podStartSLOduration=3.449975502 podStartE2EDuration="3.449975502s" podCreationTimestamp="2025-12-02 14:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:33.438109179 +0000 UTC m=+1258.853923030" watchObservedRunningTime="2025-12-02 14:03:33.449975502 +0000 UTC m=+1258.865789353" Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.473047 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6kgbc" podStartSLOduration=3.4730124079999998 podStartE2EDuration="3.473012408s" podCreationTimestamp="2025-12-02 14:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:33.462062081 +0000 UTC m=+1258.877875932" watchObservedRunningTime="2025-12-02 14:03:33.473012408 +0000 UTC 
m=+1258.888826259" Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.690017 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.803998 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.958435 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-svc\") pod \"49e6d834-fae4-4961-8866-9f49914bd21d\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.958500 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-nb\") pod \"49e6d834-fae4-4961-8866-9f49914bd21d\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.958600 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-swift-storage-0\") pod \"49e6d834-fae4-4961-8866-9f49914bd21d\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.958717 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-config\") pod \"49e6d834-fae4-4961-8866-9f49914bd21d\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.958761 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-sb\") pod \"49e6d834-fae4-4961-8866-9f49914bd21d\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.958816 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld9tf\" (UniqueName: \"kubernetes.io/projected/49e6d834-fae4-4961-8866-9f49914bd21d-kube-api-access-ld9tf\") pod \"49e6d834-fae4-4961-8866-9f49914bd21d\" (UID: \"49e6d834-fae4-4961-8866-9f49914bd21d\") " Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.966322 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e6d834-fae4-4961-8866-9f49914bd21d-kube-api-access-ld9tf" (OuterVolumeSpecName: "kube-api-access-ld9tf") pod "49e6d834-fae4-4961-8866-9f49914bd21d" (UID: "49e6d834-fae4-4961-8866-9f49914bd21d"). InnerVolumeSpecName "kube-api-access-ld9tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.984753 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49e6d834-fae4-4961-8866-9f49914bd21d" (UID: "49e6d834-fae4-4961-8866-9f49914bd21d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.987324 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "49e6d834-fae4-4961-8866-9f49914bd21d" (UID: "49e6d834-fae4-4961-8866-9f49914bd21d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:33 crc kubenswrapper[4900]: I1202 14:03:33.988344 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "49e6d834-fae4-4961-8866-9f49914bd21d" (UID: "49e6d834-fae4-4961-8866-9f49914bd21d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:33.997288 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-config" (OuterVolumeSpecName: "config") pod "49e6d834-fae4-4961-8866-9f49914bd21d" (UID: "49e6d834-fae4-4961-8866-9f49914bd21d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:33.997625 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "49e6d834-fae4-4961-8866-9f49914bd21d" (UID: "49e6d834-fae4-4961-8866-9f49914bd21d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.060846 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.060876 4900 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.060886 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.060897 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.060907 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld9tf\" (UniqueName: \"kubernetes.io/projected/49e6d834-fae4-4961-8866-9f49914bd21d-kube-api-access-ld9tf\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.060916 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49e6d834-fae4-4961-8866-9f49914bd21d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.459328 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" event={"ID":"49e6d834-fae4-4961-8866-9f49914bd21d","Type":"ContainerDied","Data":"32beecc6e844392a1f1845fd14f7d6099dbcac1774ff504fb7fa6d6c56f23162"} Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.459389 4900 scope.go:117] "RemoveContainer" containerID="306ee8b72109ed7ebcb4ffd54b4e341e9537a3c31eb6014e2b3f365e68536ad0" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.459499 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-xzwhw" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.464380 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5569b9cf-2f57-4aac-a343-95348500e0a3","Type":"ContainerStarted","Data":"f4a283efb34313eba6fb5d9ab14b88916fc019b21b27f8feda8b2f8034f63e5d"} Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.468026 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2","Type":"ContainerStarted","Data":"58e9ad6b1c17ac67bd2903ac45cb0fda2dc6878bfd45ebbbcd1b7dd5fe9d56ab"} Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.471589 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" event={"ID":"725f3563-28dc-40f8-b01e-ecc75598997d","Type":"ContainerStarted","Data":"902d8e06a92c658dceedff2f53c9c8b32a333f448fa97c1a285650bfa11663a7"} Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.472066 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.502743 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" podStartSLOduration=3.502726477 podStartE2EDuration="3.502726477s" podCreationTimestamp="2025-12-02 14:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:34.488365524 +0000 UTC m=+1259.904179375" watchObservedRunningTime="2025-12-02 14:03:34.502726477 +0000 UTC m=+1259.918540328" Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.590360 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-xzwhw"] Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.603269 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-xzwhw"] Dec 02 14:03:34 crc kubenswrapper[4900]: I1202 14:03:34.931812 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49e6d834-fae4-4961-8866-9f49914bd21d" path="/var/lib/kubelet/pods/49e6d834-fae4-4961-8866-9f49914bd21d/volumes" Dec 02 14:03:35 crc kubenswrapper[4900]: I1202 14:03:35.496077 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5569b9cf-2f57-4aac-a343-95348500e0a3","Type":"ContainerStarted","Data":"dc239b6f39b46e4c09f44263aad21f440c16eeef3c5d7eb435f4254df16c3807"} Dec 02 14:03:35 crc kubenswrapper[4900]: I1202 14:03:35.496252 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5569b9cf-2f57-4aac-a343-95348500e0a3" containerName="glance-httpd" containerID="cri-o://dc239b6f39b46e4c09f44263aad21f440c16eeef3c5d7eb435f4254df16c3807" gracePeriod=30 Dec 02 14:03:35 crc kubenswrapper[4900]: I1202 
14:03:35.496266 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5569b9cf-2f57-4aac-a343-95348500e0a3" containerName="glance-log" containerID="cri-o://f4a283efb34313eba6fb5d9ab14b88916fc019b21b27f8feda8b2f8034f63e5d" gracePeriod=30 Dec 02 14:03:35 crc kubenswrapper[4900]: I1202 14:03:35.501224 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2","Type":"ContainerStarted","Data":"2a5161c9313e9aec96a86352f904f70a0bba944fb1df352c270ae3325f7f1049"} Dec 02 14:03:35 crc kubenswrapper[4900]: I1202 14:03:35.532174 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.532152807 podStartE2EDuration="5.532152807s" podCreationTimestamp="2025-12-02 14:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:35.51798001 +0000 UTC m=+1260.933793861" watchObservedRunningTime="2025-12-02 14:03:35.532152807 +0000 UTC m=+1260.947966658" Dec 02 14:03:36 crc kubenswrapper[4900]: I1202 14:03:36.520422 4900 generic.go:334] "Generic (PLEG): container finished" podID="5569b9cf-2f57-4aac-a343-95348500e0a3" containerID="dc239b6f39b46e4c09f44263aad21f440c16eeef3c5d7eb435f4254df16c3807" exitCode=0 Dec 02 14:03:36 crc kubenswrapper[4900]: I1202 14:03:36.521157 4900 generic.go:334] "Generic (PLEG): container finished" podID="5569b9cf-2f57-4aac-a343-95348500e0a3" containerID="f4a283efb34313eba6fb5d9ab14b88916fc019b21b27f8feda8b2f8034f63e5d" exitCode=143 Dec 02 14:03:36 crc kubenswrapper[4900]: I1202 14:03:36.520508 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5569b9cf-2f57-4aac-a343-95348500e0a3","Type":"ContainerDied","Data":"dc239b6f39b46e4c09f44263aad21f440c16eeef3c5d7eb435f4254df16c3807"} Dec 02 14:03:36 crc kubenswrapper[4900]: I1202 14:03:36.521234 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5569b9cf-2f57-4aac-a343-95348500e0a3","Type":"ContainerDied","Data":"f4a283efb34313eba6fb5d9ab14b88916fc019b21b27f8feda8b2f8034f63e5d"} Dec 02 14:03:36 crc kubenswrapper[4900]: I1202 14:03:36.523735 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2","Type":"ContainerStarted","Data":"683472383f13a27531bd4fe9d7b7d612f6b504d05457b95d067c0ff85cd6196e"} Dec 02 14:03:36 crc kubenswrapper[4900]: I1202 14:03:36.523935 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" containerName="glance-httpd" containerID="cri-o://683472383f13a27531bd4fe9d7b7d612f6b504d05457b95d067c0ff85cd6196e" gracePeriod=30 Dec 02 14:03:36 crc kubenswrapper[4900]: I1202 14:03:36.523885 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" containerName="glance-log" containerID="cri-o://2a5161c9313e9aec96a86352f904f70a0bba944fb1df352c270ae3325f7f1049" gracePeriod=30 Dec 02 14:03:36 crc kubenswrapper[4900]: I1202 14:03:36.551264 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.551247968 podStartE2EDuration="6.551247968s" podCreationTimestamp="2025-12-02 14:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:36.546996039 +0000 UTC m=+1261.962809900" watchObservedRunningTime="2025-12-02 14:03:36.551247968 +0000 UTC m=+1261.967061829" Dec 02 14:03:37 crc kubenswrapper[4900]: I1202 14:03:37.536561 4900 generic.go:334] "Generic (PLEG): container finished" podID="c27acb21-bfee-4e37-9f07-38bf334a5b5c" containerID="6801577ba120fc2235b74ff52e7d832dc6e19cb77eed7e40406ab29bbc2e5f28" exitCode=0 Dec 02 14:03:37 crc kubenswrapper[4900]: I1202 14:03:37.536857 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6kgbc" event={"ID":"c27acb21-bfee-4e37-9f07-38bf334a5b5c","Type":"ContainerDied","Data":"6801577ba120fc2235b74ff52e7d832dc6e19cb77eed7e40406ab29bbc2e5f28"} Dec 02 14:03:37 crc kubenswrapper[4900]: I1202 14:03:37.539801 4900 generic.go:334] "Generic (PLEG): container finished" podID="59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" containerID="683472383f13a27531bd4fe9d7b7d612f6b504d05457b95d067c0ff85cd6196e" exitCode=0 Dec 02 14:03:37 crc kubenswrapper[4900]: I1202 14:03:37.539835 4900 generic.go:334] "Generic (PLEG): container finished" podID="59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" containerID="2a5161c9313e9aec96a86352f904f70a0bba944fb1df352c270ae3325f7f1049" exitCode=143 Dec 02 14:03:37 crc kubenswrapper[4900]: I1202 14:03:37.539845 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2","Type":"ContainerDied","Data":"683472383f13a27531bd4fe9d7b7d612f6b504d05457b95d067c0ff85cd6196e"} Dec 02 14:03:37 crc kubenswrapper[4900]: I1202 14:03:37.539892 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2","Type":"ContainerDied","Data":"2a5161c9313e9aec96a86352f904f70a0bba944fb1df352c270ae3325f7f1049"} Dec 02 14:03:41 crc kubenswrapper[4900]: I1202 14:03:41.487931 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:03:41 crc kubenswrapper[4900]: I1202 14:03:41.594658 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-97md2"] Dec 02 14:03:41 crc kubenswrapper[4900]: I1202 14:03:41.594944 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" podUID="8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" containerName="dnsmasq-dns" containerID="cri-o://22a629147b9c6048f5eb1696402fbed4a1fe15570dae11f64ecaa9bdca18840b" gracePeriod=10 Dec 02 14:03:42 crc kubenswrapper[4900]: I1202 14:03:42.607632 4900 generic.go:334] "Generic (PLEG): container finished" podID="8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" containerID="22a629147b9c6048f5eb1696402fbed4a1fe15570dae11f64ecaa9bdca18840b" exitCode=0 Dec 02 14:03:42 crc kubenswrapper[4900]: I1202 14:03:42.607720 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" event={"ID":"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14","Type":"ContainerDied","Data":"22a629147b9c6048f5eb1696402fbed4a1fe15570dae11f64ecaa9bdca18840b"} Dec 02 14:03:44 crc kubenswrapper[4900]: I1202 14:03:44.930488 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:03:44 crc kubenswrapper[4900]: I1202 14:03:44.933606 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038214 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrprj\" (UniqueName: \"kubernetes.io/projected/5569b9cf-2f57-4aac-a343-95348500e0a3-kube-api-access-zrprj\") pod \"5569b9cf-2f57-4aac-a343-95348500e0a3\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038313 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-public-tls-certs\") pod \"5569b9cf-2f57-4aac-a343-95348500e0a3\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038371 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-combined-ca-bundle\") pod \"5569b9cf-2f57-4aac-a343-95348500e0a3\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038402 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-httpd-run\") pod \"5569b9cf-2f57-4aac-a343-95348500e0a3\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038420 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-combined-ca-bundle\") pod \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038438 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-scripts\") pod \"5569b9cf-2f57-4aac-a343-95348500e0a3\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038465 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-config-data\") pod \"5569b9cf-2f57-4aac-a343-95348500e0a3\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038507 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-logs\") pod \"5569b9cf-2f57-4aac-a343-95348500e0a3\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038529 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgqgx\" (UniqueName: \"kubernetes.io/projected/c27acb21-bfee-4e37-9f07-38bf334a5b5c-kube-api-access-fgqgx\") pod \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038592 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-config-data\") pod \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038613 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5569b9cf-2f57-4aac-a343-95348500e0a3\" (UID: \"5569b9cf-2f57-4aac-a343-95348500e0a3\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038681 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-credential-keys\") pod \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038697 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-fernet-keys\") pod \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.038711 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-scripts\") pod \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\" (UID: \"c27acb21-bfee-4e37-9f07-38bf334a5b5c\") " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.045110 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c27acb21-bfee-4e37-9f07-38bf334a5b5c" (UID: "c27acb21-bfee-4e37-9f07-38bf334a5b5c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.045629 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5569b9cf-2f57-4aac-a343-95348500e0a3-kube-api-access-zrprj" (OuterVolumeSpecName: "kube-api-access-zrprj") pod "5569b9cf-2f57-4aac-a343-95348500e0a3" (UID: "5569b9cf-2f57-4aac-a343-95348500e0a3"). InnerVolumeSpecName "kube-api-access-zrprj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.059955 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5569b9cf-2f57-4aac-a343-95348500e0a3" (UID: "5569b9cf-2f57-4aac-a343-95348500e0a3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.060137 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-logs" (OuterVolumeSpecName: "logs") pod "5569b9cf-2f57-4aac-a343-95348500e0a3" (UID: "5569b9cf-2f57-4aac-a343-95348500e0a3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.069889 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-scripts" (OuterVolumeSpecName: "scripts") pod "5569b9cf-2f57-4aac-a343-95348500e0a3" (UID: "5569b9cf-2f57-4aac-a343-95348500e0a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.072740 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27acb21-bfee-4e37-9f07-38bf334a5b5c-kube-api-access-fgqgx" (OuterVolumeSpecName: "kube-api-access-fgqgx") pod "c27acb21-bfee-4e37-9f07-38bf334a5b5c" (UID: "c27acb21-bfee-4e37-9f07-38bf334a5b5c"). InnerVolumeSpecName "kube-api-access-fgqgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.079885 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c27acb21-bfee-4e37-9f07-38bf334a5b5c" (UID: "c27acb21-bfee-4e37-9f07-38bf334a5b5c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.079886 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "5569b9cf-2f57-4aac-a343-95348500e0a3" (UID: "5569b9cf-2f57-4aac-a343-95348500e0a3"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.107144 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-scripts" (OuterVolumeSpecName: "scripts") pod "c27acb21-bfee-4e37-9f07-38bf334a5b5c" (UID: "c27acb21-bfee-4e37-9f07-38bf334a5b5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.128903 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c27acb21-bfee-4e37-9f07-38bf334a5b5c" (UID: "c27acb21-bfee-4e37-9f07-38bf334a5b5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.158842 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-config-data" (OuterVolumeSpecName: "config-data") pod "c27acb21-bfee-4e37-9f07-38bf334a5b5c" (UID: "c27acb21-bfee-4e37-9f07-38bf334a5b5c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.160281 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.160316 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.160326 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.160334 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5569b9cf-2f57-4aac-a343-95348500e0a3-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.160343 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgqgx\" (UniqueName: \"kubernetes.io/projected/c27acb21-bfee-4e37-9f07-38bf334a5b5c-kube-api-access-fgqgx\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.160354 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.160380 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.160389 4900 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.160398 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.160406 4900 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c27acb21-bfee-4e37-9f07-38bf334a5b5c-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.160414 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrprj\" (UniqueName: \"kubernetes.io/projected/5569b9cf-2f57-4aac-a343-95348500e0a3-kube-api-access-zrprj\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.203766 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5569b9cf-2f57-4aac-a343-95348500e0a3" (UID: "5569b9cf-2f57-4aac-a343-95348500e0a3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.210079 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5569b9cf-2f57-4aac-a343-95348500e0a3" (UID: "5569b9cf-2f57-4aac-a343-95348500e0a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.226614 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.230557 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-config-data" (OuterVolumeSpecName: "config-data") pod "5569b9cf-2f57-4aac-a343-95348500e0a3" (UID: "5569b9cf-2f57-4aac-a343-95348500e0a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.261426 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.261460 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.261470 4900 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.261479 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5569b9cf-2f57-4aac-a343-95348500e0a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.637763 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6kgbc" event={"ID":"c27acb21-bfee-4e37-9f07-38bf334a5b5c","Type":"ContainerDied","Data":"5cf63820ce0772d05fce1679a96b997d4b8411560da201a331ca7fe489abd834"} Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.637820 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cf63820ce0772d05fce1679a96b997d4b8411560da201a331ca7fe489abd834" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.637789 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6kgbc" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.640818 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5569b9cf-2f57-4aac-a343-95348500e0a3","Type":"ContainerDied","Data":"efbfd5f5fe276b75f50fd01e2141853ac047a80c346e90a0dfd97038cfc1f5d0"} Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.640870 4900 scope.go:117] "RemoveContainer" containerID="dc239b6f39b46e4c09f44263aad21f440c16eeef3c5d7eb435f4254df16c3807" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.640870 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.686908 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.698761 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.716973 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:03:45 crc kubenswrapper[4900]: E1202 14:03:45.722544 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e6d834-fae4-4961-8866-9f49914bd21d" containerName="init" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.722582 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e6d834-fae4-4961-8866-9f49914bd21d" containerName="init" Dec 02 14:03:45 crc kubenswrapper[4900]: E1202 14:03:45.722620 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5569b9cf-2f57-4aac-a343-95348500e0a3" containerName="glance-log" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.722630 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="5569b9cf-2f57-4aac-a343-95348500e0a3" containerName="glance-log" Dec 02 14:03:45 crc kubenswrapper[4900]: E1202 14:03:45.722662 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5569b9cf-2f57-4aac-a343-95348500e0a3" containerName="glance-httpd" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.722672 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="5569b9cf-2f57-4aac-a343-95348500e0a3" containerName="glance-httpd" Dec 02 14:03:45 crc kubenswrapper[4900]: E1202 14:03:45.722686 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27acb21-bfee-4e37-9f07-38bf334a5b5c" containerName="keystone-bootstrap" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.722695 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27acb21-bfee-4e37-9f07-38bf334a5b5c" containerName="keystone-bootstrap" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.722894 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="5569b9cf-2f57-4aac-a343-95348500e0a3" containerName="glance-httpd" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.722916 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27acb21-bfee-4e37-9f07-38bf334a5b5c" containerName="keystone-bootstrap" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.722934 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e6d834-fae4-4961-8866-9f49914bd21d" containerName="init" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.722960 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="5569b9cf-2f57-4aac-a343-95348500e0a3" containerName="glance-log" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.724894 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.727420 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.727659 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.731811 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.873185 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-scripts\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.873295 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.873346 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.873434 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.873463 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.873485 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-config-data\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.873552 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-logs\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.873825 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rzx8r\" (UniqueName: \"kubernetes.io/projected/c9986100-46f5-40b2-b20c-17e127f48575-kube-api-access-rzx8r\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.975824 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.976282 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.976361 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.976391 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-config-data\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.976411 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.976451 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-logs\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.976502 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzx8r\" (UniqueName: \"kubernetes.io/projected/c9986100-46f5-40b2-b20c-17e127f48575-kube-api-access-rzx8r\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.976562 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-scripts\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.979215 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.979928 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-logs\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.980187 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.982993 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-scripts\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.987541 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.989488 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.989954 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-config-data\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:45 crc kubenswrapper[4900]: I1202 14:03:45.999771 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzx8r\" (UniqueName: \"kubernetes.io/projected/c9986100-46f5-40b2-b20c-17e127f48575-kube-api-access-rzx8r\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.030137 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " pod="openstack/glance-default-external-api-0" Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.049740 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6kgbc"] Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.056745 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6kgbc"] Dec 
02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.071923 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.145358 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-d9ld4"]
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.146586 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.148052 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.149106 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.149397 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.149603 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.149850 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x8ncb"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.153405 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d9ld4"]
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.283070 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-fernet-keys\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.283183 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-credential-keys\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.283221 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jft7\" (UniqueName: \"kubernetes.io/projected/b5bef1d4-9515-4138-84b8-da85155c6f5a-kube-api-access-9jft7\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.284037 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-scripts\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.284107 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-config-data\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.284142 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-combined-ca-bundle\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.385540 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-fernet-keys\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.385614 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-credential-keys\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.385652 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jft7\" (UniqueName: \"kubernetes.io/projected/b5bef1d4-9515-4138-84b8-da85155c6f5a-kube-api-access-9jft7\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.385673 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-scripts\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.385695 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-config-data\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.385712 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-combined-ca-bundle\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.389061 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-fernet-keys\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.389309 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-scripts\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.389407 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-combined-ca-bundle\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.390889 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-credential-keys\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.404285 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-config-data\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.406355 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jft7\" (UniqueName: \"kubernetes.io/projected/b5bef1d4-9515-4138-84b8-da85155c6f5a-kube-api-access-9jft7\") pod \"keystone-bootstrap-d9ld4\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") " pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.465163 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.921897 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5569b9cf-2f57-4aac-a343-95348500e0a3" path="/var/lib/kubelet/pods/5569b9cf-2f57-4aac-a343-95348500e0a3/volumes"
Dec 02 14:03:46 crc kubenswrapper[4900]: I1202 14:03:46.923436 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27acb21-bfee-4e37-9f07-38bf334a5b5c" path="/var/lib/kubelet/pods/c27acb21-bfee-4e37-9f07-38bf334a5b5c/volumes"
Dec 02 14:03:50 crc kubenswrapper[4900]: I1202 14:03:50.979792 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" podUID="8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout"
Dec 02 14:03:53 crc kubenswrapper[4900]: E1202 14:03:53.390397 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified"
Dec 02 14:03:53 crc kubenswrapper[4900]: E1202 14:03:53.391179 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h544h65hc5h555hb5h96h564h8fh87h574h67bh665h5cdhdh9dh694h5bfh9h665h65ch6h5c8hc9h55chfbh57bh5f8h8ch56h575h96q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7v4r4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7805c9b7-1be2-499f-b3c9-939245983c97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.519344 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-97md2"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.533112 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.649406 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-scripts\") pod \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.649497 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-httpd-run\") pod \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.649541 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-config\") pod \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.649577 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-svc\") pod \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.649623 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-logs\") pod \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.649688 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.649712 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-internal-tls-certs\") pod \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.649754 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-swift-storage-0\") pod \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.649866 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k8br\" (UniqueName: \"kubernetes.io/projected/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-kube-api-access-7k8br\") pod \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.649933 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-sb\") pod \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.650003 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-combined-ca-bundle\") pod \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.650043 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-config-data\") pod \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\" (UID: \"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.650081 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-nb\") pod \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.650147 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-246v5\" (UniqueName: \"kubernetes.io/projected/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-kube-api-access-246v5\") pod \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\" (UID: \"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14\") "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.651980 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" (UID: "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.655152 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-logs" (OuterVolumeSpecName: "logs") pod "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" (UID: "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.659147 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-kube-api-access-246v5" (OuterVolumeSpecName: "kube-api-access-246v5") pod "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" (UID: "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14"). InnerVolumeSpecName "kube-api-access-246v5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.662154 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" (UID: "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.664066 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-scripts" (OuterVolumeSpecName: "scripts") pod "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" (UID: "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.665528 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-kube-api-access-7k8br" (OuterVolumeSpecName: "kube-api-access-7k8br") pod "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" (UID: "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2"). InnerVolumeSpecName "kube-api-access-7k8br". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.693331 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" (UID: "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.720381 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-config" (OuterVolumeSpecName: "config") pod "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" (UID: "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.720399 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" (UID: "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.722458 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" (UID: "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.723904 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" (UID: "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.727282 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" (UID: "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.727330 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" (UID: "8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.745015 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-config-data" (OuterVolumeSpecName: "config-data") pod "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" (UID: "59f8bf03-9c2c-4793-a6ed-294bc91a2ee2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753101 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k8br\" (UniqueName: \"kubernetes.io/projected/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-kube-api-access-7k8br\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753137 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753150 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753165 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753177 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753189 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-246v5\" (UniqueName: \"kubernetes.io/projected/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-kube-api-access-246v5\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753200 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753210 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753223 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-config\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753236 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753250 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-logs\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753285 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753314 4900 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753327 4900 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753803 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" event={"ID":"8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14","Type":"ContainerDied","Data":"c245523cafc9a58338623bb6f0c0ab7108b3a44c2890901b8630fad99bea656c"}
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.753900 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-97md2"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.769327 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59f8bf03-9c2c-4793-a6ed-294bc91a2ee2","Type":"ContainerDied","Data":"58e9ad6b1c17ac67bd2903ac45cb0fda2dc6878bfd45ebbbcd1b7dd5fe9d56ab"}
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.769837 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.786854 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.855040 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.857587 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.863516 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.885872 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-97md2"]
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.893163 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 02 14:03:53 crc kubenswrapper[4900]: E1202 14:03:53.893594 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" containerName="dnsmasq-dns"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.893613 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" containerName="dnsmasq-dns"
Dec 02 14:03:53 crc kubenswrapper[4900]: E1202 14:03:53.893631 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" containerName="glance-httpd"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.893637 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" containerName="glance-httpd"
Dec 02 14:03:53 crc kubenswrapper[4900]: E1202 14:03:53.893721 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" containerName="glance-log"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.893728 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" containerName="glance-log"
Dec 02 14:03:53 crc kubenswrapper[4900]: E1202 14:03:53.893743 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" containerName="init"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.893749 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" containerName="init"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.893941 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" containerName="glance-httpd"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.893958 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" containerName="glance-log"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.893983 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" containerName="dnsmasq-dns"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.894935 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.901958 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.902148 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.917235 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-97md2"]
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.936542 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.956163 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.956202 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6cbr\" (UniqueName: \"kubernetes.io/projected/25e4b3fb-1235-4d67-b70b-53760e92e6c5-kube-api-access-d6cbr\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.956251 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.956268 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.956294 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.956344 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-logs\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.956359 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:53 crc kubenswrapper[4900]: I1202 14:03:53.956417 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.058175 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.058221 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6cbr\" (UniqueName: \"kubernetes.io/projected/25e4b3fb-1235-4d67-b70b-53760e92e6c5-kube-api-access-d6cbr\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.058309 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.058336 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.058381 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.058429 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-logs\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.058452 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.058475 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.059535 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.060054 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-logs\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.060183 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.074843 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.075178 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.075324 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.077703 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.092264 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6cbr\" (UniqueName: \"kubernetes.io/projected/25e4b3fb-1235-4d67-b70b-53760e92e6c5-kube-api-access-d6cbr\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.146108 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.218544 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 02 14:03:54 crc kubenswrapper[4900]: E1202 14:03:54.936635 4900 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Dec 02 14:03:54 crc kubenswrapper[4900]: E1202 14:03:54.937251 4900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n67sg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-b9s52_openstack(fd24b5dd-8bba-467d-977a-cbd11c05e52b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.937746 4900 scope.go:117] "RemoveContainer" containerID="f4a283efb34313eba6fb5d9ab14b88916fc019b21b27f8feda8b2f8034f63e5d"
Dec 02 14:03:54 crc kubenswrapper[4900]: E1202 14:03:54.938818 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-b9s52" podUID="fd24b5dd-8bba-467d-977a-cbd11c05e52b"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.940686 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f8bf03-9c2c-4793-a6ed-294bc91a2ee2" path="/var/lib/kubelet/pods/59f8bf03-9c2c-4793-a6ed-294bc91a2ee2/volumes"
Dec 02 14:03:54 crc kubenswrapper[4900]: I1202 14:03:54.942198 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" path="/var/lib/kubelet/pods/8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14/volumes"
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.122007 4900 scope.go:117] "RemoveContainer" containerID="22a629147b9c6048f5eb1696402fbed4a1fe15570dae11f64ecaa9bdca18840b"
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.166281 4900 scope.go:117] "RemoveContainer" containerID="f6f49c300a7a4ebceb502106fd1d9f7bd28e4e090c1d8f2e3a1ee796369a40a5"
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.195830 4900 scope.go:117] "RemoveContainer" containerID="683472383f13a27531bd4fe9d7b7d612f6b504d05457b95d067c0ff85cd6196e"
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.229715 4900 scope.go:117] "RemoveContainer" containerID="2a5161c9313e9aec96a86352f904f70a0bba944fb1df352c270ae3325f7f1049"
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.533194 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d9ld4"]
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.676238 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.798175 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d9ld4" event={"ID":"b5bef1d4-9515-4138-84b8-da85155c6f5a","Type":"ContainerStarted","Data":"8087a9560c219873c90a9d53097b96c8fded5db305f1645c12fdc53e707047ef"}
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.798452 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d9ld4" event={"ID":"b5bef1d4-9515-4138-84b8-da85155c6f5a","Type":"ContainerStarted","Data":"0433aa253fe3e295fdf838cf43fb5fa2e03ee432cbdb6bf81364d59ad9ba74c5"}
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.802599 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7h46p" event={"ID":"433a8e08-2261-425b-97d1-2b61ad9ae5f9","Type":"ContainerStarted","Data":"aaa4c89b780f334d06fd958b796370ba5e407ef9eb4f3e2ec808d619b9abf8d4"}
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.804535 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4dw9x" event={"ID":"95452ca6-e25a-44d1-a666-eb99c921ae7c","Type":"ContainerStarted","Data":"a7e40440ff834859e3f71bef147e2660006177005143793d11dd69ed36a33d40"}
Dec 02 14:03:55 crc kubenswrapper[4900]: E1202 14:03:55.817870 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-b9s52" podUID="fd24b5dd-8bba-467d-977a-cbd11c05e52b"
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.825485 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-d9ld4" podStartSLOduration=9.825457431 podStartE2EDuration="9.825457431s" podCreationTimestamp="2025-12-02 14:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:55.814565645 +0000 UTC m=+1281.230379496" watchObservedRunningTime="2025-12-02 14:03:55.825457431 +0000 UTC m=+1281.241271302"
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.842941 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-4dw9x" podStartSLOduration=3.251373123 podStartE2EDuration="25.84292154s" podCreationTimestamp="2025-12-02 14:03:30 +0000 UTC" firstStartedPulling="2025-12-02 14:03:32.295802034 +0000 UTC m=+1257.711615885" lastFinishedPulling="2025-12-02 14:03:54.887350421 +0000 UTC m=+1280.303164302" observedRunningTime="2025-12-02 14:03:55.827119987 +0000 UTC m=+1281.242933838" watchObservedRunningTime="2025-12-02 14:03:55.84292154 +0000 UTC m=+1281.258735391"
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.850902 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7h46p" podStartSLOduration=3.28829795 podStartE2EDuration="25.850885704s" podCreationTimestamp="2025-12-02 14:03:30 +0000 UTC" firstStartedPulling="2025-12-02 14:03:32.330910929 +0000 UTC m=+1257.746724780" lastFinishedPulling="2025-12-02 14:03:54.893498643 +0000 UTC m=+1280.309312534" observedRunningTime="2025-12-02 14:03:55.848586939 +0000 UTC m=+1281.264400800" watchObservedRunningTime="2025-12-02 14:03:55.850885704 +0000 UTC m=+1281.266699555"
Dec 02 14:03:55 crc kubenswrapper[4900]: I1202 14:03:55.981214 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-97md2" podUID="8e3cc5ef-a73e-4b5e-9dc0-8b2a30b14d14" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout"
Dec 02 14:03:56 crc kubenswrapper[4900]: I1202 14:03:56.422293 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 02 14:03:56 crc kubenswrapper[4900]: I1202 14:03:56.838484 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25e4b3fb-1235-4d67-b70b-53760e92e6c5","Type":"ContainerStarted","Data":"6870737e0cf3ecdff8826fd07261bc7b3573137f0200b94fd8cdf5cf02530176"}
Dec 02 14:03:56 crc kubenswrapper[4900]: I1202 14:03:56.846165 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9986100-46f5-40b2-b20c-17e127f48575","Type":"ContainerStarted","Data":"31afa1d90ffc3eb3b2811089930a7d4a5d800e2c7a80b1d24fb0bce435318111"}
Dec 02 14:03:57 crc kubenswrapper[4900]: I1202 14:03:57.856477 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25e4b3fb-1235-4d67-b70b-53760e92e6c5","Type":"ContainerStarted","Data":"ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6"}
Dec 02 14:03:57 crc kubenswrapper[4900]: I1202 14:03:57.858657 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9986100-46f5-40b2-b20c-17e127f48575","Type":"ContainerStarted","Data":"8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621"}
Dec 02 14:03:57 crc kubenswrapper[4900]: I1202 14:03:57.860385 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7805c9b7-1be2-499f-b3c9-939245983c97","Type":"ContainerStarted","Data":"d84fb88baaeb29b7d31e12f8b8f180feec2a1d84dd4ce1bc960b0db2744d5068"}
Dec 02 14:03:58 crc kubenswrapper[4900]: I1202 14:03:58.903883 4900 generic.go:334] "Generic (PLEG): container finished" podID="b5bef1d4-9515-4138-84b8-da85155c6f5a" containerID="8087a9560c219873c90a9d53097b96c8fded5db305f1645c12fdc53e707047ef" exitCode=0
Dec 02 14:03:58 crc kubenswrapper[4900]: I1202 14:03:58.904263 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d9ld4" event={"ID":"b5bef1d4-9515-4138-84b8-da85155c6f5a","Type":"ContainerDied","Data":"8087a9560c219873c90a9d53097b96c8fded5db305f1645c12fdc53e707047ef"}
Dec 02 14:03:58 crc kubenswrapper[4900]: I1202 14:03:58.907776 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25e4b3fb-1235-4d67-b70b-53760e92e6c5","Type":"ContainerStarted","Data":"2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5"}
Dec 02 14:03:58 crc kubenswrapper[4900]: I1202 14:03:58.928075 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9986100-46f5-40b2-b20c-17e127f48575","Type":"ContainerStarted","Data":"561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf"}
Dec 02 14:03:58 crc kubenswrapper[4900]: I1202 14:03:58.954224 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.954209376 podStartE2EDuration="5.954209376s" podCreationTimestamp="2025-12-02 14:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:58.953438724 +0000 UTC m=+1284.369252575" watchObservedRunningTime="2025-12-02 14:03:58.954209376 +0000 UTC m=+1284.370023227"
Dec 02 14:03:58 crc kubenswrapper[4900]: I1202 14:03:58.981607 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.981585304 podStartE2EDuration="13.981585304s" podCreationTimestamp="2025-12-02 14:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:58.97502231 +0000 UTC m=+1284.390836181" watchObservedRunningTime="2025-12-02 14:03:58.981585304 +0000 UTC m=+1284.397399165"
Dec 02 14:03:59 crc kubenswrapper[4900]: I1202 14:03:59.920069 4900 generic.go:334] "Generic (PLEG): container finished" podID="95452ca6-e25a-44d1-a666-eb99c921ae7c" containerID="a7e40440ff834859e3f71bef147e2660006177005143793d11dd69ed36a33d40" exitCode=0
Dec 02 14:03:59 crc kubenswrapper[4900]: I1202 14:03:59.920178 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4dw9x" event={"ID":"95452ca6-e25a-44d1-a666-eb99c921ae7c","Type":"ContainerDied","Data":"a7e40440ff834859e3f71bef147e2660006177005143793d11dd69ed36a33d40"}
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.316726 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.407586 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-config-data\") pod \"b5bef1d4-9515-4138-84b8-da85155c6f5a\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") "
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.407681 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-fernet-keys\") pod \"b5bef1d4-9515-4138-84b8-da85155c6f5a\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") "
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.407725 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-scripts\") pod \"b5bef1d4-9515-4138-84b8-da85155c6f5a\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") "
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.407746 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-combined-ca-bundle\") pod \"b5bef1d4-9515-4138-84b8-da85155c6f5a\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") "
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.407781 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jft7\" (UniqueName: \"kubernetes.io/projected/b5bef1d4-9515-4138-84b8-da85155c6f5a-kube-api-access-9jft7\") pod \"b5bef1d4-9515-4138-84b8-da85155c6f5a\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") "
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.407807 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-credential-keys\") pod \"b5bef1d4-9515-4138-84b8-da85155c6f5a\" (UID: \"b5bef1d4-9515-4138-84b8-da85155c6f5a\") "
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.413916 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b5bef1d4-9515-4138-84b8-da85155c6f5a" (UID: "b5bef1d4-9515-4138-84b8-da85155c6f5a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.414057 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-scripts" (OuterVolumeSpecName: "scripts") pod "b5bef1d4-9515-4138-84b8-da85155c6f5a" (UID: "b5bef1d4-9515-4138-84b8-da85155c6f5a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.415980 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b5bef1d4-9515-4138-84b8-da85155c6f5a" (UID: "b5bef1d4-9515-4138-84b8-da85155c6f5a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.416967 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5bef1d4-9515-4138-84b8-da85155c6f5a-kube-api-access-9jft7" (OuterVolumeSpecName: "kube-api-access-9jft7") pod "b5bef1d4-9515-4138-84b8-da85155c6f5a" (UID: "b5bef1d4-9515-4138-84b8-da85155c6f5a"). InnerVolumeSpecName "kube-api-access-9jft7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.436062 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-config-data" (OuterVolumeSpecName: "config-data") pod "b5bef1d4-9515-4138-84b8-da85155c6f5a" (UID: "b5bef1d4-9515-4138-84b8-da85155c6f5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.437163 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5bef1d4-9515-4138-84b8-da85155c6f5a" (UID: "b5bef1d4-9515-4138-84b8-da85155c6f5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.509721 4900 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.509948 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.510064 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.510172 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jft7\" (UniqueName: \"kubernetes.io/projected/b5bef1d4-9515-4138-84b8-da85155c6f5a-kube-api-access-9jft7\") on node \"crc\" DevicePath \"\""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.510282 4900 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.510420 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5bef1d4-9515-4138-84b8-da85155c6f5a-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.962500 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d9ld4" event={"ID":"b5bef1d4-9515-4138-84b8-da85155c6f5a","Type":"ContainerDied","Data":"0433aa253fe3e295fdf838cf43fb5fa2e03ee432cbdb6bf81364d59ad9ba74c5"}
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.962548 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0433aa253fe3e295fdf838cf43fb5fa2e03ee432cbdb6bf81364d59ad9ba74c5"
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.962623 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d9ld4"
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.976820 4900 generic.go:334] "Generic (PLEG): container finished" podID="433a8e08-2261-425b-97d1-2b61ad9ae5f9" containerID="aaa4c89b780f334d06fd958b796370ba5e407ef9eb4f3e2ec808d619b9abf8d4" exitCode=0
Dec 02 14:04:00 crc kubenswrapper[4900]: I1202 14:04:00.976894 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7h46p" event={"ID":"433a8e08-2261-425b-97d1-2b61ad9ae5f9","Type":"ContainerDied","Data":"aaa4c89b780f334d06fd958b796370ba5e407ef9eb4f3e2ec808d619b9abf8d4"}
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.033153 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6595dffb96-c4mrx"]
Dec 02 14:04:01 crc kubenswrapper[4900]: E1202 14:04:01.033558 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5bef1d4-9515-4138-84b8-da85155c6f5a" containerName="keystone-bootstrap"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.033577 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5bef1d4-9515-4138-84b8-da85155c6f5a" containerName="keystone-bootstrap"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.034123 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5bef1d4-9515-4138-84b8-da85155c6f5a" containerName="keystone-bootstrap"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.034820 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.041054 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.042268 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.042434 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.042567 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.042883 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x8ncb"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.043076 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.054103 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6595dffb96-c4mrx"]
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.120696 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-internal-tls-certs\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.120769 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-combined-ca-bundle\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.120803 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-fernet-keys\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.120884 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-credential-keys\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.120941 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cstck\" (UniqueName: \"kubernetes.io/projected/d42b962f-20f0-43d1-a1c4-c16c9392ec82-kube-api-access-cstck\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.120994 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-public-tls-certs\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.121017 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-scripts\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.121054 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-config-data\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.222772 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-combined-ca-bundle\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.223104 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-fernet-keys\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.223172 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-credential-keys\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.223215 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cstck\" (UniqueName: \"kubernetes.io/projected/d42b962f-20f0-43d1-a1c4-c16c9392ec82-kube-api-access-cstck\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.223257 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-public-tls-certs\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.223274 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-scripts\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.223294 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-config-data\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.223339 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-internal-tls-certs\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.228392 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-scripts\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.228850 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-combined-ca-bundle\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.229102 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-fernet-keys\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.230026 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-credential-keys\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx"
Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.230322 4900
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-config-data\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx" Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.233088 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-public-tls-certs\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx" Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.241164 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cstck\" (UniqueName: \"kubernetes.io/projected/d42b962f-20f0-43d1-a1c4-c16c9392ec82-kube-api-access-cstck\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx" Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.262574 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-internal-tls-certs\") pod \"keystone-6595dffb96-c4mrx\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " pod="openstack/keystone-6595dffb96-c4mrx" Dec 02 14:04:01 crc kubenswrapper[4900]: I1202 14:04:01.356854 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6595dffb96-c4mrx" Dec 02 14:04:04 crc kubenswrapper[4900]: I1202 14:04:04.219759 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 14:04:04 crc kubenswrapper[4900]: I1202 14:04:04.220078 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 14:04:04 crc kubenswrapper[4900]: I1202 14:04:04.251686 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 14:04:04 crc kubenswrapper[4900]: I1202 14:04:04.267292 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 14:04:05 crc kubenswrapper[4900]: I1202 14:04:05.014967 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 14:04:05 crc kubenswrapper[4900]: I1202 14:04:05.015353 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 14:04:06 crc kubenswrapper[4900]: I1202 14:04:06.072094 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 14:04:06 crc kubenswrapper[4900]: I1202 14:04:06.072141 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 14:04:06 crc kubenswrapper[4900]: I1202 14:04:06.102585 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 14:04:06 crc kubenswrapper[4900]: I1202 14:04:06.109696 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 14:04:07 crc kubenswrapper[4900]: I1202 14:04:07.006503 4900 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 14:04:07 crc kubenswrapper[4900]: I1202 14:04:07.007536 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 14:04:07 crc kubenswrapper[4900]: I1202 14:04:07.035481 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 14:04:07 crc kubenswrapper[4900]: I1202 14:04:07.035545 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.029407 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.055791 4900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.264113 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7h46p" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.271725 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4dw9x" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.296515 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.399254 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-combined-ca-bundle\") pod \"95452ca6-e25a-44d1-a666-eb99c921ae7c\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.399293 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4nw8\" (UniqueName: \"kubernetes.io/projected/433a8e08-2261-425b-97d1-2b61ad9ae5f9-kube-api-access-g4nw8\") pod \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.399320 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-config-data\") pod \"95452ca6-e25a-44d1-a666-eb99c921ae7c\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.399375 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-db-sync-config-data\") pod \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.399541 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66jj2\" (UniqueName: \"kubernetes.io/projected/95452ca6-e25a-44d1-a666-eb99c921ae7c-kube-api-access-66jj2\") pod \"95452ca6-e25a-44d1-a666-eb99c921ae7c\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.399580 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-scripts\") pod 
\"95452ca6-e25a-44d1-a666-eb99c921ae7c\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.399665 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-combined-ca-bundle\") pod \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\" (UID: \"433a8e08-2261-425b-97d1-2b61ad9ae5f9\") " Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.399708 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95452ca6-e25a-44d1-a666-eb99c921ae7c-logs\") pod \"95452ca6-e25a-44d1-a666-eb99c921ae7c\" (UID: \"95452ca6-e25a-44d1-a666-eb99c921ae7c\") " Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.401376 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95452ca6-e25a-44d1-a666-eb99c921ae7c-logs" (OuterVolumeSpecName: "logs") pod "95452ca6-e25a-44d1-a666-eb99c921ae7c" (UID: "95452ca6-e25a-44d1-a666-eb99c921ae7c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.416175 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-scripts" (OuterVolumeSpecName: "scripts") pod "95452ca6-e25a-44d1-a666-eb99c921ae7c" (UID: "95452ca6-e25a-44d1-a666-eb99c921ae7c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.417179 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95452ca6-e25a-44d1-a666-eb99c921ae7c-kube-api-access-66jj2" (OuterVolumeSpecName: "kube-api-access-66jj2") pod "95452ca6-e25a-44d1-a666-eb99c921ae7c" (UID: "95452ca6-e25a-44d1-a666-eb99c921ae7c"). InnerVolumeSpecName "kube-api-access-66jj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.418332 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/433a8e08-2261-425b-97d1-2b61ad9ae5f9-kube-api-access-g4nw8" (OuterVolumeSpecName: "kube-api-access-g4nw8") pod "433a8e08-2261-425b-97d1-2b61ad9ae5f9" (UID: "433a8e08-2261-425b-97d1-2b61ad9ae5f9"). InnerVolumeSpecName "kube-api-access-g4nw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.436653 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95452ca6-e25a-44d1-a666-eb99c921ae7c" (UID: "95452ca6-e25a-44d1-a666-eb99c921ae7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.447024 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "433a8e08-2261-425b-97d1-2b61ad9ae5f9" (UID: "433a8e08-2261-425b-97d1-2b61ad9ae5f9"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.449537 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-config-data" (OuterVolumeSpecName: "config-data") pod "95452ca6-e25a-44d1-a666-eb99c921ae7c" (UID: "95452ca6-e25a-44d1-a666-eb99c921ae7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.488035 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "433a8e08-2261-425b-97d1-2b61ad9ae5f9" (UID: "433a8e08-2261-425b-97d1-2b61ad9ae5f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.520338 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66jj2\" (UniqueName: \"kubernetes.io/projected/95452ca6-e25a-44d1-a666-eb99c921ae7c-kube-api-access-66jj2\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.520373 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.520383 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.520392 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95452ca6-e25a-44d1-a666-eb99c921ae7c-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.520401 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.520413 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4nw8\" (UniqueName: \"kubernetes.io/projected/433a8e08-2261-425b-97d1-2b61ad9ae5f9-kube-api-access-g4nw8\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.520420 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95452ca6-e25a-44d1-a666-eb99c921ae7c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.520429 4900 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/433a8e08-2261-425b-97d1-2b61ad9ae5f9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:09 crc kubenswrapper[4900]: I1202 14:04:09.542676 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6595dffb96-c4mrx"] Dec 02 14:04:09 crc kubenswrapper[4900]: W1202 14:04:09.577557 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd42b962f_20f0_43d1_a1c4_c16c9392ec82.slice/crio-e99d2517fece5e3cd9d9868f45aa11e813c3ef98d6874f4e763b1a8d7eb21800 
WatchSource:0}: Error finding container e99d2517fece5e3cd9d9868f45aa11e813c3ef98d6874f4e763b1a8d7eb21800: Status 404 returned error can't find the container with id e99d2517fece5e3cd9d9868f45aa11e813c3ef98d6874f4e763b1a8d7eb21800 Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.071837 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7805c9b7-1be2-499f-b3c9-939245983c97","Type":"ContainerStarted","Data":"d088728d94feb3e30f6afea7edbceb3456bcd3c06261216eb67a9d389901df1f"} Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.074773 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4dw9x" event={"ID":"95452ca6-e25a-44d1-a666-eb99c921ae7c","Type":"ContainerDied","Data":"00229a74b3ddf43fadcb8f1adfeb39bb8e493eed73e4e20a63725dbcf10bfac6"} Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.074818 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00229a74b3ddf43fadcb8f1adfeb39bb8e493eed73e4e20a63725dbcf10bfac6" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.075146 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4dw9x" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.078109 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7h46p" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.079279 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7h46p" event={"ID":"433a8e08-2261-425b-97d1-2b61ad9ae5f9","Type":"ContainerDied","Data":"8942b2a6a2e553ddd24e70526eaf93d7633f0ba91b0c6e7e9eb7877dfcbdfcde"} Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.079362 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8942b2a6a2e553ddd24e70526eaf93d7633f0ba91b0c6e7e9eb7877dfcbdfcde" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.087212 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6595dffb96-c4mrx" event={"ID":"d42b962f-20f0-43d1-a1c4-c16c9392ec82","Type":"ContainerStarted","Data":"e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd"} Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.087250 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6595dffb96-c4mrx" event={"ID":"d42b962f-20f0-43d1-a1c4-c16c9392ec82","Type":"ContainerStarted","Data":"e99d2517fece5e3cd9d9868f45aa11e813c3ef98d6874f4e763b1a8d7eb21800"} Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.088438 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6595dffb96-c4mrx" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.122371 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6595dffb96-c4mrx" podStartSLOduration=9.122352935 podStartE2EDuration="9.122352935s" podCreationTimestamp="2025-12-02 14:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:10.109030891 +0000 UTC m=+1295.524844742" watchObservedRunningTime="2025-12-02 14:04:10.122352935 +0000 UTC m=+1295.538166776" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.504516 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f6dffdfb8-h46pm"] Dec 02 14:04:10 crc kubenswrapper[4900]: E1202 14:04:10.505933 4900 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433a8e08-2261-425b-97d1-2b61ad9ae5f9" containerName="barbican-db-sync" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.505953 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="433a8e08-2261-425b-97d1-2b61ad9ae5f9" containerName="barbican-db-sync" Dec 02 14:04:10 crc kubenswrapper[4900]: E1202 14:04:10.506013 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95452ca6-e25a-44d1-a666-eb99c921ae7c" containerName="placement-db-sync" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.506019 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="95452ca6-e25a-44d1-a666-eb99c921ae7c" containerName="placement-db-sync" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.506308 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="95452ca6-e25a-44d1-a666-eb99c921ae7c" containerName="placement-db-sync" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.506352 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="433a8e08-2261-425b-97d1-2b61ad9ae5f9" containerName="barbican-db-sync" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.508454 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.526309 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.533175 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2ps98" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.553485 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.553761 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.553934 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.598095 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f6dffdfb8-h46pm"] Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.664596 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-scripts\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.664684 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-config-data\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.664744 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b0e50c7-752e-4879-a382-ff97500cfd89-logs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 
14:04:10.664785 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcgvs\" (UniqueName: \"kubernetes.io/projected/7b0e50c7-752e-4879-a382-ff97500cfd89-kube-api-access-jcgvs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.664803 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-public-tls-certs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.664826 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-internal-tls-certs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.664846 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-combined-ca-bundle\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.730729 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-ff8b8d959-29bd8"] Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.732180 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.742886 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5c79b4474d-mx7p9"] Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.744362 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.750970 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.751258 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.751414 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dj9wt" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.758023 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.758215 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-ff8b8d959-29bd8"] Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.766480 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcgvs\" (UniqueName: \"kubernetes.io/projected/7b0e50c7-752e-4879-a382-ff97500cfd89-kube-api-access-jcgvs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.766553 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-public-tls-certs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.766602 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-internal-tls-certs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.768416 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c79b4474d-mx7p9"] Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.766635 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-combined-ca-bundle\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.778839 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-scripts\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.778883 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-config-data\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.779134 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b0e50c7-752e-4879-a382-ff97500cfd89-logs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.780246 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b0e50c7-752e-4879-a382-ff97500cfd89-logs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.783311 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-public-tls-certs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.787597 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-scripts\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.794496 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-config-data\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.801665 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-internal-tls-certs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.804241 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-combined-ca-bundle\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.842432 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcgvs\" (UniqueName: \"kubernetes.io/projected/7b0e50c7-752e-4879-a382-ff97500cfd89-kube-api-access-jcgvs\") pod \"placement-6f6dffdfb8-h46pm\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.855775 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.884608 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-combined-ca-bundle\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.884704 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.884735 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-combined-ca-bundle\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.884753 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jc4z\" (UniqueName: \"kubernetes.io/projected/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-kube-api-access-2jc4z\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.884776 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data-custom\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.884790 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-logs\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.884851 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.884867 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data-custom\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 
14:04:10.884896 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-logs\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.884934 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndpk2\" (UniqueName: \"kubernetes.io/projected/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-kube-api-access-ndpk2\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.906079 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7dnjp"] Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.908005 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.988816 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.989228 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data-custom\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.989262 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-logs\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.989304 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndpk2\" (UniqueName: \"kubernetes.io/projected/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-kube-api-access-ndpk2\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.989339 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-combined-ca-bundle\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.989379 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 
14:04:10.989401 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-combined-ca-bundle\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.989427 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jc4z\" (UniqueName: \"kubernetes.io/projected/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-kube-api-access-2jc4z\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.989447 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data-custom\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.989465 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-logs\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:10 crc kubenswrapper[4900]: I1202 14:04:10.990016 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-logs\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.003016 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-logs\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.003857 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.004373 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data-custom\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.007204 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-combined-ca-bundle\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " 
pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.007274 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-combined-ca-bundle\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.013583 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.014237 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data-custom\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.036096 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndpk2\" (UniqueName: \"kubernetes.io/projected/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-kube-api-access-ndpk2\") pod \"barbican-worker-ff8b8d959-29bd8\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.036542 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jc4z\" (UniqueName: \"kubernetes.io/projected/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-kube-api-access-2jc4z\") pod \"barbican-keystone-listener-5c79b4474d-mx7p9\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.063872 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7dnjp"] Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.090551 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.090658 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.090754 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.090808 4900 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.090849 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2cfb\" (UniqueName: \"kubernetes.io/projected/066558b7-33bd-4154-b7bb-b98d6cfa139c-kube-api-access-p2cfb\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.090891 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-config\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.134123 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9s52" event={"ID":"fd24b5dd-8bba-467d-977a-cbd11c05e52b","Type":"ContainerStarted","Data":"367b246fd004b35aecff3af708a39d75f5432904e0781ed90281b57adfd5a473"} Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.192435 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-config\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.192813 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.192863 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.192937 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.192975 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.193000 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cfb\" (UniqueName: 
\"kubernetes.io/projected/066558b7-33bd-4154-b7bb-b98d6cfa139c-kube-api-access-p2cfb\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.194345 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-config\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.194929 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.195214 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.197434 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.197517 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.201450 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b9bb78f94-sjnpb"] Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.203071 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.211940 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.212113 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b9bb78f94-sjnpb"] Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.216768 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-b9s52" podStartSLOduration=4.158693199 podStartE2EDuration="41.216750567s" podCreationTimestamp="2025-12-02 14:03:30 +0000 UTC" firstStartedPulling="2025-12-02 14:03:32.330276331 +0000 UTC m=+1257.746090182" lastFinishedPulling="2025-12-02 14:04:09.388333699 +0000 UTC m=+1294.804147550" observedRunningTime="2025-12-02 14:04:11.182821095 +0000 UTC m=+1296.598634946" watchObservedRunningTime="2025-12-02 14:04:11.216750567 +0000 UTC m=+1296.632564418" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.221160 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2cfb\" (UniqueName: \"kubernetes.io/projected/066558b7-33bd-4154-b7bb-b98d6cfa139c-kube-api-access-p2cfb\") pod \"dnsmasq-dns-7c67bffd47-7dnjp\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.257870 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.294862 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.295343 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.295575 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2fnc\" (UniqueName: \"kubernetes.io/projected/245e38af-1ae6-461f-8422-4ec8fed4f781-kube-api-access-d2fnc\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.295734 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-combined-ca-bundle\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.295916 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data-custom\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.295945 4900 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245e38af-1ae6-461f-8422-4ec8fed4f781-logs\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.332263 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.398089 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.398128 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2fnc\" (UniqueName: \"kubernetes.io/projected/245e38af-1ae6-461f-8422-4ec8fed4f781-kube-api-access-d2fnc\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.398195 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-combined-ca-bundle\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.398259 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data-custom\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.398280 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245e38af-1ae6-461f-8422-4ec8fed4f781-logs\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.398710 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245e38af-1ae6-461f-8422-4ec8fed4f781-logs\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.409247 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-combined-ca-bundle\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.409869 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data-custom\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 
02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.410243 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.416272 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2fnc\" (UniqueName: \"kubernetes.io/projected/245e38af-1ae6-461f-8422-4ec8fed4f781-kube-api-access-d2fnc\") pod \"barbican-api-b9bb78f94-sjnpb\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.544041 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f6dffdfb8-h46pm"] Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.572541 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.779429 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7dnjp"] Dec 02 14:04:11 crc kubenswrapper[4900]: W1202 14:04:11.812988 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod066558b7_33bd_4154_b7bb_b98d6cfa139c.slice/crio-0248c12d57d0c8cddac1f56a26aa5264e7a927e7c2fd86ca5fe5ab08e8d4db4c WatchSource:0}: Error finding container 0248c12d57d0c8cddac1f56a26aa5264e7a927e7c2fd86ca5fe5ab08e8d4db4c: Status 404 returned error can't find the container with id 0248c12d57d0c8cddac1f56a26aa5264e7a927e7c2fd86ca5fe5ab08e8d4db4c Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.877432 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c79b4474d-mx7p9"] Dec 02 14:04:11 crc kubenswrapper[4900]: I1202 14:04:11.888519 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-ff8b8d959-29bd8"] Dec 02 14:04:12 crc kubenswrapper[4900]: I1202 14:04:12.070806 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b9bb78f94-sjnpb"] Dec 02 14:04:12 crc kubenswrapper[4900]: W1202 14:04:12.085388 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod245e38af_1ae6_461f_8422_4ec8fed4f781.slice/crio-e4b127f5bd8662a9e18fa2dd24a0e94df2cb6d927f6a1192126e381bc3b0f4f9 WatchSource:0}: Error finding container e4b127f5bd8662a9e18fa2dd24a0e94df2cb6d927f6a1192126e381bc3b0f4f9: Status 404 returned error can't find the container with id e4b127f5bd8662a9e18fa2dd24a0e94df2cb6d927f6a1192126e381bc3b0f4f9 Dec 02 14:04:12 crc kubenswrapper[4900]: I1202 14:04:12.149705 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" event={"ID":"241c5e6f-d993-4c7a-90a2-1ae1786dbea2","Type":"ContainerStarted","Data":"4039b0524567620b8df758af492d5c9757a0f4df02040848b968c2078da15855"} Dec 02 14:04:12 crc kubenswrapper[4900]: I1202 14:04:12.152958 4900 generic.go:334] "Generic (PLEG): container finished" podID="066558b7-33bd-4154-b7bb-b98d6cfa139c" containerID="8622714b4af10f4679714dcef75b51b02fa214a81bae7f1181c185ad90cc0387" exitCode=0 Dec 02 14:04:12 crc kubenswrapper[4900]: I1202 14:04:12.153163 4900 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" event={"ID":"066558b7-33bd-4154-b7bb-b98d6cfa139c","Type":"ContainerDied","Data":"8622714b4af10f4679714dcef75b51b02fa214a81bae7f1181c185ad90cc0387"} Dec 02 14:04:12 crc kubenswrapper[4900]: I1202 14:04:12.153589 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" event={"ID":"066558b7-33bd-4154-b7bb-b98d6cfa139c","Type":"ContainerStarted","Data":"0248c12d57d0c8cddac1f56a26aa5264e7a927e7c2fd86ca5fe5ab08e8d4db4c"} Dec 02 14:04:12 crc kubenswrapper[4900]: I1202 14:04:12.157222 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f6dffdfb8-h46pm" event={"ID":"7b0e50c7-752e-4879-a382-ff97500cfd89","Type":"ContainerStarted","Data":"72b6e3300d0787fe99949f1eade7bb409bf6f76d9bb245ec44e8976a1315be81"} Dec 02 14:04:12 crc kubenswrapper[4900]: I1202 14:04:12.157314 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f6dffdfb8-h46pm" event={"ID":"7b0e50c7-752e-4879-a382-ff97500cfd89","Type":"ContainerStarted","Data":"7d949b736f7e4ed227f9fc07962b37278f3ae5880a6f7739e8978c75cabf68c8"} Dec 02 14:04:12 crc kubenswrapper[4900]: I1202 14:04:12.159354 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9bb78f94-sjnpb" event={"ID":"245e38af-1ae6-461f-8422-4ec8fed4f781","Type":"ContainerStarted","Data":"e4b127f5bd8662a9e18fa2dd24a0e94df2cb6d927f6a1192126e381bc3b0f4f9"} Dec 02 14:04:12 crc kubenswrapper[4900]: I1202 14:04:12.161764 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ff8b8d959-29bd8" event={"ID":"cb2b5602-0b26-4de1-ac2c-3606bd0aede3","Type":"ContainerStarted","Data":"fd2ffbe1c70f7a3ce44f2e575210fbcde3ab26cf8376a0f31267627b2e95894c"} Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.171842 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" event={"ID":"066558b7-33bd-4154-b7bb-b98d6cfa139c","Type":"ContainerStarted","Data":"8ef8394ade07a5f166cf2d4cc7e5c2c73d32bd419edf16d32c29571935cddf7d"} Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.172436 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.173780 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f6dffdfb8-h46pm" event={"ID":"7b0e50c7-752e-4879-a382-ff97500cfd89","Type":"ContainerStarted","Data":"6554b3a343d89d6f8889d9cc9f50c9bd71066e708a1b0d388e5faa67db8d54dc"} Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.173887 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.173910 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.176521 4900 generic.go:334] "Generic (PLEG): container finished" podID="9f87d7bb-973b-441c-9bb7-18a6e9532691" containerID="73133b2ba4989cd7f78a761541a674e62c5e36785b7fa274c71ca103d32fcf1c" exitCode=0 Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.176573 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6fhqb" event={"ID":"9f87d7bb-973b-441c-9bb7-18a6e9532691","Type":"ContainerDied","Data":"73133b2ba4989cd7f78a761541a674e62c5e36785b7fa274c71ca103d32fcf1c"} Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 
14:04:13.178723 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9bb78f94-sjnpb" event={"ID":"245e38af-1ae6-461f-8422-4ec8fed4f781","Type":"ContainerStarted","Data":"8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97"} Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.178749 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9bb78f94-sjnpb" event={"ID":"245e38af-1ae6-461f-8422-4ec8fed4f781","Type":"ContainerStarted","Data":"d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e"} Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.179148 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.179176 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.193256 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" podStartSLOduration=3.193237134 podStartE2EDuration="3.193237134s" podCreationTimestamp="2025-12-02 14:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:13.186135204 +0000 UTC m=+1298.601949055" watchObservedRunningTime="2025-12-02 14:04:13.193237134 +0000 UTC m=+1298.609050985" Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.209528 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6f6dffdfb8-h46pm" podStartSLOduration=3.209511822 podStartE2EDuration="3.209511822s" podCreationTimestamp="2025-12-02 14:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:13.205127049 +0000 UTC m=+1298.620940910" watchObservedRunningTime="2025-12-02 14:04:13.209511822 +0000 UTC m=+1298.625325663" Dec 02 14:04:13 crc kubenswrapper[4900]: I1202 14:04:13.231231 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b9bb78f94-sjnpb" podStartSLOduration=2.231213184 podStartE2EDuration="2.231213184s" podCreationTimestamp="2025-12-02 14:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:13.222388595 +0000 UTC m=+1298.638202446" watchObservedRunningTime="2025-12-02 14:04:13.231213184 +0000 UTC m=+1298.647027035" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.191032 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ff8b8d959-29bd8" event={"ID":"cb2b5602-0b26-4de1-ac2c-3606bd0aede3","Type":"ContainerStarted","Data":"58ddf684d77381c4c34646d9e1659713b257f24bcc73342818f13e7ed7d7268f"} Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.361831 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-79db7cb55d-4cs7x"] Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.363742 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.365285 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.366203 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.372315 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79db7cb55d-4cs7x"] Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.461262 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-combined-ca-bundle\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.461377 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgzrn\" (UniqueName: \"kubernetes.io/projected/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-kube-api-access-sgzrn\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.461456 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-public-tls-certs\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.461593 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data-custom\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.461809 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.461836 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-internal-tls-certs\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.461859 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-logs\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.563678 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-combined-ca-bundle\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.563717 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgzrn\" (UniqueName: \"kubernetes.io/projected/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-kube-api-access-sgzrn\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.563776 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-public-tls-certs\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.563812 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data-custom\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.563858 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.563874 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-internal-tls-certs\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.563912 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-logs\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.564365 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-logs\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.576498 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-public-tls-certs\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.577283 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-combined-ca-bundle\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.578244 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.579463 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-internal-tls-certs\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.596498 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgzrn\" (UniqueName: \"kubernetes.io/projected/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-kube-api-access-sgzrn\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.601265 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data-custom\") pod \"barbican-api-79db7cb55d-4cs7x\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:14 crc kubenswrapper[4900]: I1202 14:04:14.684968 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.116556 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.117020 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.204218 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ff8b8d959-29bd8" event={"ID":"cb2b5602-0b26-4de1-ac2c-3606bd0aede3","Type":"ContainerStarted","Data":"e987a3eb684f39a5280336cf0f24e6d52b9537943b4ae3c78e42d02eaacbba03"} Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.206159 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6fhqb" event={"ID":"9f87d7bb-973b-441c-9bb7-18a6e9532691","Type":"ContainerDied","Data":"03c84037f6c8cc223e31635d10947504962b47e90f60ff4eb68e0c6c41040ca5"} Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.206192 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03c84037f6c8cc223e31635d10947504962b47e90f60ff4eb68e0c6c41040ca5" Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.257811 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-ff8b8d959-29bd8" podStartSLOduration=3.811241524 podStartE2EDuration="5.257785074s" podCreationTimestamp="2025-12-02 14:04:10 +0000 UTC" firstStartedPulling="2025-12-02 14:04:11.908133964 +0000 UTC m=+1297.323947815" lastFinishedPulling="2025-12-02 14:04:13.354677504 +0000 UTC m=+1298.770491365" observedRunningTime="2025-12-02 14:04:15.220344899 +0000 UTC m=+1300.636158750" watchObservedRunningTime="2025-12-02 14:04:15.257785074 +0000 UTC m=+1300.673598925" Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.308334 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.385411 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkx4v\" (UniqueName: \"kubernetes.io/projected/9f87d7bb-973b-441c-9bb7-18a6e9532691-kube-api-access-fkx4v\") pod \"9f87d7bb-973b-441c-9bb7-18a6e9532691\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.385464 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-combined-ca-bundle\") pod \"9f87d7bb-973b-441c-9bb7-18a6e9532691\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.385607 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-config\") pod \"9f87d7bb-973b-441c-9bb7-18a6e9532691\" (UID: \"9f87d7bb-973b-441c-9bb7-18a6e9532691\") " Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.395982 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f87d7bb-973b-441c-9bb7-18a6e9532691-kube-api-access-fkx4v" (OuterVolumeSpecName: "kube-api-access-fkx4v") pod "9f87d7bb-973b-441c-9bb7-18a6e9532691" (UID: "9f87d7bb-973b-441c-9bb7-18a6e9532691"). InnerVolumeSpecName "kube-api-access-fkx4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.426800 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-config" (OuterVolumeSpecName: "config") pod "9f87d7bb-973b-441c-9bb7-18a6e9532691" (UID: "9f87d7bb-973b-441c-9bb7-18a6e9532691"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.434326 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f87d7bb-973b-441c-9bb7-18a6e9532691" (UID: "9f87d7bb-973b-441c-9bb7-18a6e9532691"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.487994 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.488050 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkx4v\" (UniqueName: \"kubernetes.io/projected/9f87d7bb-973b-441c-9bb7-18a6e9532691-kube-api-access-fkx4v\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:15 crc kubenswrapper[4900]: I1202 14:04:15.488071 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f87d7bb-973b-441c-9bb7-18a6e9532691-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.223854 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6fhqb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.568780 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7dnjp"] Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.569058 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" podUID="066558b7-33bd-4154-b7bb-b98d6cfa139c" containerName="dnsmasq-dns" containerID="cri-o://8ef8394ade07a5f166cf2d4cc7e5c2c73d32bd419edf16d32c29571935cddf7d" gracePeriod=10 Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.586695 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wd5rb"] Dec 02 14:04:16 crc kubenswrapper[4900]: E1202 14:04:16.587040 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f87d7bb-973b-441c-9bb7-18a6e9532691" containerName="neutron-db-sync" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.587052 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f87d7bb-973b-441c-9bb7-18a6e9532691" containerName="neutron-db-sync" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.587234 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f87d7bb-973b-441c-9bb7-18a6e9532691" containerName="neutron-db-sync" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.606963 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.624884 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wd5rb"] Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.641396 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6546f8c848-792t5"] Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.643137 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.652323 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-g5mcs" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.652760 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.652896 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.652994 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.682814 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6546f8c848-792t5"] Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.717633 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-httpd-config\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.717979 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-config\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.718000 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt57k\" (UniqueName: \"kubernetes.io/projected/0336fea5-1cff-472b-b5a6-37adaf489c63-kube-api-access-dt57k\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.718055 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.718070 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gdtz\" (UniqueName: \"kubernetes.io/projected/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-kube-api-access-9gdtz\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.718115 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-config\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.718149 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.718286 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-ovndb-tls-certs\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.718312 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.718331 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.718372 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-combined-ca-bundle\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.820318 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-ovndb-tls-certs\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.820366 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.820389 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.820422 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-combined-ca-bundle\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.820486 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-httpd-config\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.820507 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-config\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.820524 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt57k\" (UniqueName: \"kubernetes.io/projected/0336fea5-1cff-472b-b5a6-37adaf489c63-kube-api-access-dt57k\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.820573 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gdtz\" (UniqueName: \"kubernetes.io/projected/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-kube-api-access-9gdtz\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.820588 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.820744 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-config\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.820790 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.821422 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.821636 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.821666 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-config\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: 
\"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.821860 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.822157 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.825941 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-config\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.826583 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-httpd-config\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.827065 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-ovndb-tls-certs\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.827283 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-combined-ca-bundle\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.840728 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gdtz\" (UniqueName: \"kubernetes.io/projected/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-kube-api-access-9gdtz\") pod \"neutron-6546f8c848-792t5\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.840883 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt57k\" (UniqueName: \"kubernetes.io/projected/0336fea5-1cff-472b-b5a6-37adaf489c63-kube-api-access-dt57k\") pod \"dnsmasq-dns-848cf88cfc-wd5rb\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.944289 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:16 crc kubenswrapper[4900]: I1202 14:04:16.973931 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:17 crc kubenswrapper[4900]: I1202 14:04:17.243504 4900 generic.go:334] "Generic (PLEG): container finished" podID="066558b7-33bd-4154-b7bb-b98d6cfa139c" containerID="8ef8394ade07a5f166cf2d4cc7e5c2c73d32bd419edf16d32c29571935cddf7d" exitCode=0 Dec 02 14:04:17 crc kubenswrapper[4900]: I1202 14:04:17.243553 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" event={"ID":"066558b7-33bd-4154-b7bb-b98d6cfa139c","Type":"ContainerDied","Data":"8ef8394ade07a5f166cf2d4cc7e5c2c73d32bd419edf16d32c29571935cddf7d"} Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.898699 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d9bd66cf-nlpm2"] Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.900752 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.903078 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.904026 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.921867 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d9bd66cf-nlpm2"] Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.970977 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-config\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.971033 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-combined-ca-bundle\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.971091 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-public-tls-certs\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.971140 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlgxv\" (UniqueName: \"kubernetes.io/projected/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-kube-api-access-wlgxv\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.971454 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-ovndb-tls-certs\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.971556 4900 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-internal-tls-certs\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:18 crc kubenswrapper[4900]: I1202 14:04:18.971660 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-httpd-config\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.073735 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-ovndb-tls-certs\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.073787 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-internal-tls-certs\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.073823 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-httpd-config\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.073851 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-config\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.073871 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-combined-ca-bundle\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.073918 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-public-tls-certs\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.073957 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlgxv\" (UniqueName: \"kubernetes.io/projected/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-kube-api-access-wlgxv\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.080860 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-ovndb-tls-certs\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.081828 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-internal-tls-certs\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.082442 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-config\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.083264 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-combined-ca-bundle\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.083801 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-httpd-config\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.083950 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-public-tls-certs\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.089300 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlgxv\" (UniqueName: \"kubernetes.io/projected/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-kube-api-access-wlgxv\") pod \"neutron-5d9bd66cf-nlpm2\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.266736 4900 generic.go:334] "Generic (PLEG): container finished" podID="fd24b5dd-8bba-467d-977a-cbd11c05e52b" containerID="367b246fd004b35aecff3af708a39d75f5432904e0781ed90281b57adfd5a473" exitCode=0 Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.266779 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9s52" event={"ID":"fd24b5dd-8bba-467d-977a-cbd11c05e52b","Type":"ContainerDied","Data":"367b246fd004b35aecff3af708a39d75f5432904e0781ed90281b57adfd5a473"} Dec 02 14:04:19 crc kubenswrapper[4900]: I1202 14:04:19.274508 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.311488 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" event={"ID":"066558b7-33bd-4154-b7bb-b98d6cfa139c","Type":"ContainerDied","Data":"0248c12d57d0c8cddac1f56a26aa5264e7a927e7c2fd86ca5fe5ab08e8d4db4c"} Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.311829 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0248c12d57d0c8cddac1f56a26aa5264e7a927e7c2fd86ca5fe5ab08e8d4db4c" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.313420 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.315549 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9s52" event={"ID":"fd24b5dd-8bba-467d-977a-cbd11c05e52b","Type":"ContainerDied","Data":"50fef5cfdc27f02a5487549786d4db70803a9486577316146bccf2aab1a8f823"} Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.315583 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50fef5cfdc27f02a5487549786d4db70803a9486577316146bccf2aab1a8f823" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.316656 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b9s52" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428069 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-db-sync-config-data\") pod \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428367 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-config-data\") pod \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428432 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2cfb\" (UniqueName: \"kubernetes.io/projected/066558b7-33bd-4154-b7bb-b98d6cfa139c-kube-api-access-p2cfb\") pod \"066558b7-33bd-4154-b7bb-b98d6cfa139c\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428471 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd24b5dd-8bba-467d-977a-cbd11c05e52b-etc-machine-id\") pod \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428488 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n67sg\" (UniqueName: \"kubernetes.io/projected/fd24b5dd-8bba-467d-977a-cbd11c05e52b-kube-api-access-n67sg\") pod \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428524 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-swift-storage-0\") 
pod \"066558b7-33bd-4154-b7bb-b98d6cfa139c\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428569 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-nb\") pod \"066558b7-33bd-4154-b7bb-b98d6cfa139c\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428595 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-svc\") pod \"066558b7-33bd-4154-b7bb-b98d6cfa139c\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428619 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-sb\") pod \"066558b7-33bd-4154-b7bb-b98d6cfa139c\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428665 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-scripts\") pod \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428696 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-config\") pod \"066558b7-33bd-4154-b7bb-b98d6cfa139c\" (UID: \"066558b7-33bd-4154-b7bb-b98d6cfa139c\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.428729 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-combined-ca-bundle\") pod \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\" (UID: \"fd24b5dd-8bba-467d-977a-cbd11c05e52b\") " Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.442113 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd24b5dd-8bba-467d-977a-cbd11c05e52b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fd24b5dd-8bba-467d-977a-cbd11c05e52b" (UID: "fd24b5dd-8bba-467d-977a-cbd11c05e52b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.443369 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fd24b5dd-8bba-467d-977a-cbd11c05e52b" (UID: "fd24b5dd-8bba-467d-977a-cbd11c05e52b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.481372 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-scripts" (OuterVolumeSpecName: "scripts") pod "fd24b5dd-8bba-467d-977a-cbd11c05e52b" (UID: "fd24b5dd-8bba-467d-977a-cbd11c05e52b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.481696 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066558b7-33bd-4154-b7bb-b98d6cfa139c-kube-api-access-p2cfb" (OuterVolumeSpecName: "kube-api-access-p2cfb") pod "066558b7-33bd-4154-b7bb-b98d6cfa139c" (UID: "066558b7-33bd-4154-b7bb-b98d6cfa139c"). InnerVolumeSpecName "kube-api-access-p2cfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.481801 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd24b5dd-8bba-467d-977a-cbd11c05e52b-kube-api-access-n67sg" (OuterVolumeSpecName: "kube-api-access-n67sg") pod "fd24b5dd-8bba-467d-977a-cbd11c05e52b" (UID: "fd24b5dd-8bba-467d-977a-cbd11c05e52b"). InnerVolumeSpecName "kube-api-access-n67sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.535028 4900 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.535069 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2cfb\" (UniqueName: \"kubernetes.io/projected/066558b7-33bd-4154-b7bb-b98d6cfa139c-kube-api-access-p2cfb\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.535080 4900 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd24b5dd-8bba-467d-977a-cbd11c05e52b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.535089 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n67sg\" (UniqueName: \"kubernetes.io/projected/fd24b5dd-8bba-467d-977a-cbd11c05e52b-kube-api-access-n67sg\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.535097 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.548510 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd24b5dd-8bba-467d-977a-cbd11c05e52b" (UID: "fd24b5dd-8bba-467d-977a-cbd11c05e52b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.568273 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-config-data" (OuterVolumeSpecName: "config-data") pod "fd24b5dd-8bba-467d-977a-cbd11c05e52b" (UID: "fd24b5dd-8bba-467d-977a-cbd11c05e52b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.577150 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "066558b7-33bd-4154-b7bb-b98d6cfa139c" (UID: "066558b7-33bd-4154-b7bb-b98d6cfa139c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.578987 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "066558b7-33bd-4154-b7bb-b98d6cfa139c" (UID: "066558b7-33bd-4154-b7bb-b98d6cfa139c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.636561 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.636947 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.636961 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd24b5dd-8bba-467d-977a-cbd11c05e52b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.636971 4900 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.650033 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "066558b7-33bd-4154-b7bb-b98d6cfa139c" (UID: "066558b7-33bd-4154-b7bb-b98d6cfa139c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.722134 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "066558b7-33bd-4154-b7bb-b98d6cfa139c" (UID: "066558b7-33bd-4154-b7bb-b98d6cfa139c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.732121 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-config" (OuterVolumeSpecName: "config") pod "066558b7-33bd-4154-b7bb-b98d6cfa139c" (UID: "066558b7-33bd-4154-b7bb-b98d6cfa139c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.741730 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.741763 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.741773 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/066558b7-33bd-4154-b7bb-b98d6cfa139c-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.858120 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79db7cb55d-4cs7x"] Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.864267 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wd5rb"] Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.918598 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6546f8c848-792t5"] Dec 02 14:04:21 crc kubenswrapper[4900]: E1202 14:04:21.968051 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" Dec 02 14:04:21 crc kubenswrapper[4900]: I1202 14:04:21.996123 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d9bd66cf-nlpm2"] Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.328672 4900 generic.go:334] "Generic (PLEG): container finished" podID="0336fea5-1cff-472b-b5a6-37adaf489c63" containerID="ca933b7296750ac573e957989e6567ab850a989298f09a9e9e118f858a7885d1" exitCode=0 Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.328774 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" event={"ID":"0336fea5-1cff-472b-b5a6-37adaf489c63","Type":"ContainerDied","Data":"ca933b7296750ac573e957989e6567ab850a989298f09a9e9e118f858a7885d1"} Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.329096 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" event={"ID":"0336fea5-1cff-472b-b5a6-37adaf489c63","Type":"ContainerStarted","Data":"2f7d62cedf05fa79d040bea72b3a81c2ce95aeadbba8fcc8fe668d9a0deb20ef"} Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.331014 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d9bd66cf-nlpm2" event={"ID":"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5","Type":"ContainerStarted","Data":"9238dcf7f7f82b6dc7a50ba1a116dc4de238023a2ef150a8893f84fb27dbe133"} Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.335024 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7805c9b7-1be2-499f-b3c9-939245983c97","Type":"ContainerStarted","Data":"17de38699e85eb4b4349d299d77aec5a855cca7af92bd3c9b055200056256850"} Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.335213 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" 
containerName="ceilometer-notification-agent" containerID="cri-o://d84fb88baaeb29b7d31e12f8b8f180feec2a1d84dd4ce1bc960b0db2744d5068" gracePeriod=30 Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.335272 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.335315 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" containerName="proxy-httpd" containerID="cri-o://17de38699e85eb4b4349d299d77aec5a855cca7af92bd3c9b055200056256850" gracePeriod=30 Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.335340 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" containerName="sg-core" containerID="cri-o://d088728d94feb3e30f6afea7edbceb3456bcd3c06261216eb67a9d389901df1f" gracePeriod=30 Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.346273 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" event={"ID":"241c5e6f-d993-4c7a-90a2-1ae1786dbea2","Type":"ContainerStarted","Data":"9b9a1adc9bf6bc055b6124dbfb0cf0940f73cceff1fb98ba82f90ebb4fa7c9e3"} Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.353188 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6546f8c848-792t5" event={"ID":"41d88fa5-0aa7-4ab7-8089-e073efd31ff0","Type":"ContainerStarted","Data":"99e006ed89c5e5fa44bcadfaecbda91fa480e0524e0de1fae422e487c980d8d1"} Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.353229 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6546f8c848-792t5" event={"ID":"41d88fa5-0aa7-4ab7-8089-e073efd31ff0","Type":"ContainerStarted","Data":"4e1c615b9faf554247f1eab9b7f04d0d40974e476b61ed32c70cb0feec65263a"} Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.369206 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-7dnjp" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.373859 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79db7cb55d-4cs7x" event={"ID":"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571","Type":"ContainerStarted","Data":"4735b662992f5922ce1cad03f6fa4def9946bffac92b4623b987fa581665c1db"} Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.373918 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79db7cb55d-4cs7x" event={"ID":"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571","Type":"ContainerStarted","Data":"1e1869fc88c75d6f11a89a6d01f794c6ff750d4790f3a4705e89b8d389ca2081"} Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.373988 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-b9s52" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.493202 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7dnjp"] Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.500949 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7dnjp"] Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.684183 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:04:22 crc kubenswrapper[4900]: E1202 14:04:22.685014 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066558b7-33bd-4154-b7bb-b98d6cfa139c" containerName="init" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.685031 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="066558b7-33bd-4154-b7bb-b98d6cfa139c" containerName="init" Dec 02 14:04:22 crc kubenswrapper[4900]: E1202 14:04:22.685051 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066558b7-33bd-4154-b7bb-b98d6cfa139c" containerName="dnsmasq-dns" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.685058 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="066558b7-33bd-4154-b7bb-b98d6cfa139c" containerName="dnsmasq-dns" Dec 02 14:04:22 crc kubenswrapper[4900]: E1202 14:04:22.685076 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd24b5dd-8bba-467d-977a-cbd11c05e52b" containerName="cinder-db-sync" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.685082 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd24b5dd-8bba-467d-977a-cbd11c05e52b" containerName="cinder-db-sync" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.685288 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd24b5dd-8bba-467d-977a-cbd11c05e52b" containerName="cinder-db-sync" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.685321 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="066558b7-33bd-4154-b7bb-b98d6cfa139c" containerName="dnsmasq-dns" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.686358 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.693444 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-psqf2" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.693689 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.693811 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.693969 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.708313 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.770752 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wd5rb"] Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.775912 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a0ec673-a3a9-4554-926e-beadcc2fab09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.775956 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.776015 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.776080 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8dkj\" (UniqueName: \"kubernetes.io/projected/7a0ec673-a3a9-4554-926e-beadcc2fab09-kube-api-access-n8dkj\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.776111 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.776165 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.820265 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-bhwdt"] Dec 02 14:04:22 crc 
kubenswrapper[4900]: I1202 14:04:22.822861 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.836721 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-bhwdt"] Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.877686 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.877744 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a0ec673-a3a9-4554-926e-beadcc2fab09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.877764 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.877812 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.877846 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.877875 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-svc\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.877912 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8dkj\" (UniqueName: \"kubernetes.io/projected/7a0ec673-a3a9-4554-926e-beadcc2fab09-kube-api-access-n8dkj\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.877944 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.877972 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-config\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.878009 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.878051 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.878072 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qqnl\" (UniqueName: \"kubernetes.io/projected/a352e2a1-219a-4b55-9d8f-d4607dd3890c-kube-api-access-8qqnl\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.877841 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a0ec673-a3a9-4554-926e-beadcc2fab09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.881301 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-scripts\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.883834 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.885462 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.908455 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8dkj\" (UniqueName: \"kubernetes.io/projected/7a0ec673-a3a9-4554-926e-beadcc2fab09-kube-api-access-n8dkj\") pod \"cinder-scheduler-0\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.908703 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " pod="openstack/cinder-scheduler-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.940046 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="066558b7-33bd-4154-b7bb-b98d6cfa139c" path="/var/lib/kubelet/pods/066558b7-33bd-4154-b7bb-b98d6cfa139c/volumes" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.949704 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.951353 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.962001 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.968130 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.979994 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.980037 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qqnl\" (UniqueName: \"kubernetes.io/projected/a352e2a1-219a-4b55-9d8f-d4607dd3890c-kube-api-access-8qqnl\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.980095 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.980123 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.980160 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-svc\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.980210 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-config\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.980998 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-config\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" 
(UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.981344 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.981408 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.981816 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-svc\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:22 crc kubenswrapper[4900]: I1202 14:04:22.994484 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.019381 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qqnl\" (UniqueName: \"kubernetes.io/projected/a352e2a1-219a-4b55-9d8f-d4607dd3890c-kube-api-access-8qqnl\") pod \"dnsmasq-dns-6578955fd5-bhwdt\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.039274 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.083914 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd50edca-16dd-4aba-9b69-baa808ce0d7e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.084046 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.084079 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tczss\" (UniqueName: \"kubernetes.io/projected/fd50edca-16dd-4aba-9b69-baa808ce0d7e-kube-api-access-tczss\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.084127 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.084191 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data-custom\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.084607 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-scripts\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.084676 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd50edca-16dd-4aba-9b69-baa808ce0d7e-logs\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.163861 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.197086 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data-custom\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.197144 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-scripts\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.197162 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd50edca-16dd-4aba-9b69-baa808ce0d7e-logs\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.197292 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd50edca-16dd-4aba-9b69-baa808ce0d7e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.197425 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.197466 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tczss\" (UniqueName: \"kubernetes.io/projected/fd50edca-16dd-4aba-9b69-baa808ce0d7e-kube-api-access-tczss\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.197532 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.200399 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd50edca-16dd-4aba-9b69-baa808ce0d7e-logs\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.202177 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd50edca-16dd-4aba-9b69-baa808ce0d7e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.206818 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " 
pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.217475 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-scripts\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.223922 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data-custom\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.227711 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.255461 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tczss\" (UniqueName: \"kubernetes.io/projected/fd50edca-16dd-4aba-9b69-baa808ce0d7e-kube-api-access-tczss\") pod \"cinder-api-0\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") " pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.286319 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.426498 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.431060 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" event={"ID":"0336fea5-1cff-472b-b5a6-37adaf489c63","Type":"ContainerStarted","Data":"a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3"} Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.431221 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" podUID="0336fea5-1cff-472b-b5a6-37adaf489c63" containerName="dnsmasq-dns" containerID="cri-o://a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3" gracePeriod=10 Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.431505 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.460572 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d9bd66cf-nlpm2" event={"ID":"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5","Type":"ContainerStarted","Data":"dbab360a13373b0f107811e32ac3f4e9da16fc04fec31450a8afa527c07e139b"} Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.466796 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" podStartSLOduration=7.4667750250000005 podStartE2EDuration="7.466775025s" podCreationTimestamp="2025-12-02 14:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:23.451136435 +0000 UTC m=+1308.866950286" watchObservedRunningTime="2025-12-02 14:04:23.466775025 +0000 UTC m=+1308.882588876" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.489669 4900 
generic.go:334] "Generic (PLEG): container finished" podID="7805c9b7-1be2-499f-b3c9-939245983c97" containerID="17de38699e85eb4b4349d299d77aec5a855cca7af92bd3c9b055200056256850" exitCode=0 Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.489702 4900 generic.go:334] "Generic (PLEG): container finished" podID="7805c9b7-1be2-499f-b3c9-939245983c97" containerID="d088728d94feb3e30f6afea7edbceb3456bcd3c06261216eb67a9d389901df1f" exitCode=2 Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.489829 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7805c9b7-1be2-499f-b3c9-939245983c97","Type":"ContainerDied","Data":"17de38699e85eb4b4349d299d77aec5a855cca7af92bd3c9b055200056256850"} Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.489857 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7805c9b7-1be2-499f-b3c9-939245983c97","Type":"ContainerDied","Data":"d088728d94feb3e30f6afea7edbceb3456bcd3c06261216eb67a9d389901df1f"} Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.514873 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" event={"ID":"241c5e6f-d993-4c7a-90a2-1ae1786dbea2","Type":"ContainerStarted","Data":"683ad46b79d8da86e3dc06d5fc634651aa5b590466fe3ab013890ca87d56975d"} Dec 02 14:04:23 crc kubenswrapper[4900]: W1202 14:04:23.536612 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a0ec673_a3a9_4554_926e_beadcc2fab09.slice/crio-1224c7152432bc39654b5adde73753b38cee30ea67f587886601f5dbebe0156e WatchSource:0}: Error finding container 1224c7152432bc39654b5adde73753b38cee30ea67f587886601f5dbebe0156e: Status 404 returned error can't find the container with id 1224c7152432bc39654b5adde73753b38cee30ea67f587886601f5dbebe0156e Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.538989 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" podStartSLOduration=4.242246091 podStartE2EDuration="13.5389706s" podCreationTimestamp="2025-12-02 14:04:10 +0000 UTC" firstStartedPulling="2025-12-02 14:04:11.907675921 +0000 UTC m=+1297.323489772" lastFinishedPulling="2025-12-02 14:04:21.20440043 +0000 UTC m=+1306.620214281" observedRunningTime="2025-12-02 14:04:23.536230993 +0000 UTC m=+1308.952044844" watchObservedRunningTime="2025-12-02 14:04:23.5389706 +0000 UTC m=+1308.954784451" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.541849 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6546f8c848-792t5" event={"ID":"41d88fa5-0aa7-4ab7-8089-e073efd31ff0","Type":"ContainerStarted","Data":"06d9d69c290fd56f12d48c09442b22ccd9d32e63a9fc8add3b2502eb112d4a4d"} Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.542953 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.586858 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79db7cb55d-4cs7x" event={"ID":"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571","Type":"ContainerStarted","Data":"32bcc73f011b1518d11ed9404adb699f4c8c5a3bbe633bebd10a6b2b3117dd08"} Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.588003 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:23 crc 
kubenswrapper[4900]: I1202 14:04:23.588025 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.589611 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6546f8c848-792t5" podStartSLOduration=7.589587937 podStartE2EDuration="7.589587937s" podCreationTimestamp="2025-12-02 14:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:23.573826293 +0000 UTC m=+1308.989640144" watchObservedRunningTime="2025-12-02 14:04:23.589587937 +0000 UTC m=+1309.005401788" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.665122 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.701430 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-79db7cb55d-4cs7x" podStartSLOduration=9.701410549 podStartE2EDuration="9.701410549s" podCreationTimestamp="2025-12-02 14:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:23.616036982 +0000 UTC m=+1309.031850833" watchObservedRunningTime="2025-12-02 14:04:23.701410549 +0000 UTC m=+1309.117224400" Dec 02 14:04:23 crc kubenswrapper[4900]: W1202 14:04:23.818500 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda352e2a1_219a_4b55_9d8f_d4607dd3890c.slice/crio-1148ef1d96c8ce5cd47f283c998f78148fe6fcde2f5300054cd03a8143bd3245 WatchSource:0}: Error finding container 1148ef1d96c8ce5cd47f283c998f78148fe6fcde2f5300054cd03a8143bd3245: Status 404 returned error can't find the container with id 1148ef1d96c8ce5cd47f283c998f78148fe6fcde2f5300054cd03a8143bd3245 Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.821089 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-bhwdt"] Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.897268 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:23 crc kubenswrapper[4900]: I1202 14:04:23.963409 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.202049 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.377296 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-svc\") pod \"0336fea5-1cff-472b-b5a6-37adaf489c63\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.377517 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-config\") pod \"0336fea5-1cff-472b-b5a6-37adaf489c63\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.377690 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-sb\") pod \"0336fea5-1cff-472b-b5a6-37adaf489c63\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.377780 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-nb\") pod \"0336fea5-1cff-472b-b5a6-37adaf489c63\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.377904 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-swift-storage-0\") pod \"0336fea5-1cff-472b-b5a6-37adaf489c63\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.377989 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt57k\" (UniqueName: \"kubernetes.io/projected/0336fea5-1cff-472b-b5a6-37adaf489c63-kube-api-access-dt57k\") pod \"0336fea5-1cff-472b-b5a6-37adaf489c63\" (UID: \"0336fea5-1cff-472b-b5a6-37adaf489c63\") " Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.393029 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0336fea5-1cff-472b-b5a6-37adaf489c63-kube-api-access-dt57k" (OuterVolumeSpecName: "kube-api-access-dt57k") pod "0336fea5-1cff-472b-b5a6-37adaf489c63" (UID: "0336fea5-1cff-472b-b5a6-37adaf489c63"). InnerVolumeSpecName "kube-api-access-dt57k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.474772 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-config" (OuterVolumeSpecName: "config") pod "0336fea5-1cff-472b-b5a6-37adaf489c63" (UID: "0336fea5-1cff-472b-b5a6-37adaf489c63"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.485916 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.485945 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt57k\" (UniqueName: \"kubernetes.io/projected/0336fea5-1cff-472b-b5a6-37adaf489c63-kube-api-access-dt57k\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.486490 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0336fea5-1cff-472b-b5a6-37adaf489c63" (UID: "0336fea5-1cff-472b-b5a6-37adaf489c63"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.499773 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0336fea5-1cff-472b-b5a6-37adaf489c63" (UID: "0336fea5-1cff-472b-b5a6-37adaf489c63"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.507339 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0336fea5-1cff-472b-b5a6-37adaf489c63" (UID: "0336fea5-1cff-472b-b5a6-37adaf489c63"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.545979 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0336fea5-1cff-472b-b5a6-37adaf489c63" (UID: "0336fea5-1cff-472b-b5a6-37adaf489c63"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.589681 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.589728 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.589738 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.589747 4900 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0336fea5-1cff-472b-b5a6-37adaf489c63-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.602249 4900 generic.go:334] "Generic (PLEG): container finished" podID="0336fea5-1cff-472b-b5a6-37adaf489c63" containerID="a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3" exitCode=0
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.602297 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" event={"ID":"0336fea5-1cff-472b-b5a6-37adaf489c63","Type":"ContainerDied","Data":"a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3"}
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.602325 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb" event={"ID":"0336fea5-1cff-472b-b5a6-37adaf489c63","Type":"ContainerDied","Data":"2f7d62cedf05fa79d040bea72b3a81c2ce95aeadbba8fcc8fe668d9a0deb20ef"}
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.602340 4900 scope.go:117] "RemoveContainer" containerID="a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3"
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.602438 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-wd5rb"
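
Each volume of the deleted dnsmasq pod above moves through the same three-phase teardown: the reconciler logs "operationExecutor.UnmountVolume started", the operation generator reports "UnmountVolume.TearDown succeeded", and a final "Volume detached" with an empty DevicePath clears the volume from the kubelet's actual state of world. The UniqueName encodes plugin type, pod UID and volume name, which also determines the on-disk location under /var/lib/kubelet/pods (the same directory a later "Cleaned up orphaned pod volumes dir" entry removes). A sketch of that path mapping, assuming the standard kubelet layout (the helper itself is hypothetical):

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // configMapVolumeDir shows where a configmap volume such as "dns-svc"
    // above is materialized on the node (standard kubelet directory layout).
    func configMapVolumeDir(podUID, volume string) string {
        return filepath.Join("/var/lib/kubelet/pods", podUID,
            "volumes", "kubernetes.io~configmap", volume)
    }

    func main() {
        fmt.Println(configMapVolumeDir("0336fea5-1cff-472b-b5a6-37adaf489c63", "dns-svc"))
        // /var/lib/kubelet/pods/0336fea5-1cff-472b-b5a6-37adaf489c63/volumes/kubernetes.io~configmap/dns-svc
    }
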
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.622750 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d9bd66cf-nlpm2" event={"ID":"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5","Type":"ContainerStarted","Data":"ff24395ff17544005ed3b0c813dfed8d4179e1e8e38687a4303ee6b98024dcbd"}
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.631444 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5d9bd66cf-nlpm2"
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.641634 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fd50edca-16dd-4aba-9b69-baa808ce0d7e","Type":"ContainerStarted","Data":"4c4cab3911416b495ca2298e71527421bfa372a6dd2a2583da5b6434818a602c"}
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.645711 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wd5rb"]
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.656837 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a0ec673-a3a9-4554-926e-beadcc2fab09","Type":"ContainerStarted","Data":"1224c7152432bc39654b5adde73753b38cee30ea67f587886601f5dbebe0156e"}
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.661878 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wd5rb"]
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.663884 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5d9bd66cf-nlpm2" podStartSLOduration=6.663873466 podStartE2EDuration="6.663873466s" podCreationTimestamp="2025-12-02 14:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:24.653047251 +0000 UTC m=+1310.068861102" watchObservedRunningTime="2025-12-02 14:04:24.663873466 +0000 UTC m=+1310.079687317"
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.669190 4900 scope.go:117] "RemoveContainer" containerID="ca933b7296750ac573e957989e6567ab850a989298f09a9e9e118f858a7885d1"
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.673619 4900 generic.go:334] "Generic (PLEG): container finished" podID="7805c9b7-1be2-499f-b3c9-939245983c97" containerID="d84fb88baaeb29b7d31e12f8b8f180feec2a1d84dd4ce1bc960b0db2744d5068" exitCode=0
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.673713 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7805c9b7-1be2-499f-b3c9-939245983c97","Type":"ContainerDied","Data":"d84fb88baaeb29b7d31e12f8b8f180feec2a1d84dd4ce1bc960b0db2744d5068"}
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.688571 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" event={"ID":"a352e2a1-219a-4b55-9d8f-d4607dd3890c","Type":"ContainerStarted","Data":"1148ef1d96c8ce5cd47f283c998f78148fe6fcde2f5300054cd03a8143bd3245"}
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.700792 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
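
In the entries just below, each "RemoveContainer" is followed by a failed "ContainerStatus from runtime service" call and a "DeleteContainer returned error". These look alarming but are benign: the container was already removed together with its pod sandbox, so CRI-O answers NotFound and there is nothing left to delete. The usual pattern for making such cleanup idempotent is to treat NotFound as success; a sketch of that pattern (removeContainer is a stand-in, not kubelet's real call path):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer simulates the CRI reply seen in the log entries below.
    func removeContainer(id string) error {
        return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func main() {
        err := removeContainer("a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3")
        if status.Code(err) == codes.NotFound {
            fmt.Println("container already gone; treating removal as complete")
            return
        }
        if err != nil {
            fmt.Println("real failure:", err)
        }
    }
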
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.717382 4900 scope.go:117] "RemoveContainer" containerID="a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3"
Dec 02 14:04:24 crc kubenswrapper[4900]: E1202 14:04:24.717867 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3\": container with ID starting with a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3 not found: ID does not exist" containerID="a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3"
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.717912 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3"} err="failed to get container status \"a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3\": rpc error: code = NotFound desc = could not find container \"a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3\": container with ID starting with a15fae6aae14b52f3d2f9b9e7136d14c5663764d2fd10dee261c2cc47a4a02c3 not found: ID does not exist"
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.717932 4900 scope.go:117] "RemoveContainer" containerID="ca933b7296750ac573e957989e6567ab850a989298f09a9e9e118f858a7885d1"
Dec 02 14:04:24 crc kubenswrapper[4900]: E1202 14:04:24.718313 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca933b7296750ac573e957989e6567ab850a989298f09a9e9e118f858a7885d1\": container with ID starting with ca933b7296750ac573e957989e6567ab850a989298f09a9e9e118f858a7885d1 not found: ID does not exist" containerID="ca933b7296750ac573e957989e6567ab850a989298f09a9e9e118f858a7885d1"
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.718331 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca933b7296750ac573e957989e6567ab850a989298f09a9e9e118f858a7885d1"} err="failed to get container status \"ca933b7296750ac573e957989e6567ab850a989298f09a9e9e118f858a7885d1\": rpc error: code = NotFound desc = could not find container \"ca933b7296750ac573e957989e6567ab850a989298f09a9e9e118f858a7885d1\": container with ID starting with ca933b7296750ac573e957989e6567ab850a989298f09a9e9e118f858a7885d1 not found: ID does not exist"
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.794251 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-run-httpd\") pod \"7805c9b7-1be2-499f-b3c9-939245983c97\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") "
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.794305 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-sg-core-conf-yaml\") pod \"7805c9b7-1be2-499f-b3c9-939245983c97\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") "
Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.794394 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-scripts\") pod \"7805c9b7-1be2-499f-b3c9-939245983c97\" (UID:
\"7805c9b7-1be2-499f-b3c9-939245983c97\") " Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.794511 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-combined-ca-bundle\") pod \"7805c9b7-1be2-499f-b3c9-939245983c97\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.794556 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-config-data\") pod \"7805c9b7-1be2-499f-b3c9-939245983c97\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.794663 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v4r4\" (UniqueName: \"kubernetes.io/projected/7805c9b7-1be2-499f-b3c9-939245983c97-kube-api-access-7v4r4\") pod \"7805c9b7-1be2-499f-b3c9-939245983c97\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.794689 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-log-httpd\") pod \"7805c9b7-1be2-499f-b3c9-939245983c97\" (UID: \"7805c9b7-1be2-499f-b3c9-939245983c97\") " Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.794740 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7805c9b7-1be2-499f-b3c9-939245983c97" (UID: "7805c9b7-1be2-499f-b3c9-939245983c97"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.795145 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7805c9b7-1be2-499f-b3c9-939245983c97" (UID: "7805c9b7-1be2-499f-b3c9-939245983c97"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.795479 4900 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.795502 4900 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7805c9b7-1be2-499f-b3c9-939245983c97-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.804653 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7805c9b7-1be2-499f-b3c9-939245983c97-kube-api-access-7v4r4" (OuterVolumeSpecName: "kube-api-access-7v4r4") pod "7805c9b7-1be2-499f-b3c9-939245983c97" (UID: "7805c9b7-1be2-499f-b3c9-939245983c97"). InnerVolumeSpecName "kube-api-access-7v4r4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.810047 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-scripts" (OuterVolumeSpecName: "scripts") pod "7805c9b7-1be2-499f-b3c9-939245983c97" (UID: "7805c9b7-1be2-499f-b3c9-939245983c97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.827070 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7805c9b7-1be2-499f-b3c9-939245983c97" (UID: "7805c9b7-1be2-499f-b3c9-939245983c97"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.896695 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v4r4\" (UniqueName: \"kubernetes.io/projected/7805c9b7-1be2-499f-b3c9-939245983c97-kube-api-access-7v4r4\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.896722 4900 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.896731 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.900779 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-config-data" (OuterVolumeSpecName: "config-data") pod "7805c9b7-1be2-499f-b3c9-939245983c97" (UID: "7805c9b7-1be2-499f-b3c9-939245983c97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.900894 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7805c9b7-1be2-499f-b3c9-939245983c97" (UID: "7805c9b7-1be2-499f-b3c9-939245983c97"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.925303 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0336fea5-1cff-472b-b5a6-37adaf489c63" path="/var/lib/kubelet/pods/0336fea5-1cff-472b-b5a6-37adaf489c63/volumes" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.998418 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:24 crc kubenswrapper[4900]: I1202 14:04:24.998461 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7805c9b7-1be2-499f-b3c9-939245983c97-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.697353 4900 generic.go:334] "Generic (PLEG): container finished" podID="a352e2a1-219a-4b55-9d8f-d4607dd3890c" containerID="3747c7dcd5e0e5831178f5667e047c7cb17f5cffb0c9a78b64fbfa803f7af3e0" exitCode=0 Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.697504 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" event={"ID":"a352e2a1-219a-4b55-9d8f-d4607dd3890c","Type":"ContainerDied","Data":"3747c7dcd5e0e5831178f5667e047c7cb17f5cffb0c9a78b64fbfa803f7af3e0"} Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.700724 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fd50edca-16dd-4aba-9b69-baa808ce0d7e","Type":"ContainerStarted","Data":"f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261"} Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.705992 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.706508 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7805c9b7-1be2-499f-b3c9-939245983c97","Type":"ContainerDied","Data":"32e17f7ee317c07a8c4b2696adddd3ee61ba47d2c3bd4ceb1b0d5f2891900d79"}
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.706541 4900 scope.go:117] "RemoveContainer" containerID="17de38699e85eb4b4349d299d77aec5a855cca7af92bd3c9b055200056256850"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.718876 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.870959 4900 scope.go:117] "RemoveContainer" containerID="d088728d94feb3e30f6afea7edbceb3456bcd3c06261216eb67a9d389901df1f"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.946399 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.950872 4900 scope.go:117] "RemoveContainer" containerID="d84fb88baaeb29b7d31e12f8b8f180feec2a1d84dd4ce1bc960b0db2744d5068"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.960797 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.983478 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 02 14:04:25 crc kubenswrapper[4900]: E1202 14:04:25.996959 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0336fea5-1cff-472b-b5a6-37adaf489c63" containerName="init"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.996988 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0336fea5-1cff-472b-b5a6-37adaf489c63" containerName="init"
Dec 02 14:04:25 crc kubenswrapper[4900]: E1202 14:04:25.997012 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" containerName="sg-core"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.997019 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" containerName="sg-core"
Dec 02 14:04:25 crc kubenswrapper[4900]: E1202 14:04:25.997042 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" containerName="proxy-httpd"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.997049 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" containerName="proxy-httpd"
Dec 02 14:04:25 crc kubenswrapper[4900]: E1202 14:04:25.997061 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" containerName="ceilometer-notification-agent"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.997075 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" containerName="ceilometer-notification-agent"
Dec 02 14:04:25 crc kubenswrapper[4900]: E1202 14:04:25.997085 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0336fea5-1cff-472b-b5a6-37adaf489c63" containerName="dnsmasq-dns"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.997093 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0336fea5-1cff-472b-b5a6-37adaf489c63" containerName="dnsmasq-dns"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.997413 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" containerName="proxy-httpd"
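
The cpu_manager/state_mem/memory_manager burst above is housekeeping triggered while the replacement ceilometer-0 is admitted: per-container resource assignments are keyed by (podUID, containerName), and entries belonging to pods that no longer exist (the old ceilometer-0 and the removed dnsmasq pod) get swept out. The E-level lines merely flag the stale leftovers; each is immediately followed by an I-level confirmation of the deletion. A toy version of that sweep (the map layout is illustrative, not the real state_mem store):

    package main

    import "fmt"

    type key struct{ podUID, container string }

    func main() {
        // Stale per-container CPU assignments left over from deleted pods.
        assignments := map[key]string{
            {"0336fea5-1cff-472b-b5a6-37adaf489c63", "dnsmasq-dns"}: "0-3",
            {"7805c9b7-1be2-499f-b3c9-939245983c97", "sg-core"}:     "0-3",
        }
        active := map[string]bool{} // neither pod UID is active any more
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.container)
                delete(assignments, k) // "Deleted CPUSet assignment"
            }
        }
    }
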
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.997449 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" containerName="ceilometer-notification-agent"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.997463 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0336fea5-1cff-472b-b5a6-37adaf489c63" containerName="dnsmasq-dns"
Dec 02 14:04:25 crc kubenswrapper[4900]: I1202 14:04:25.997476 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" containerName="sg-core"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.004399 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.006574 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.014257 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.016272 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.144011 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.144363 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-log-httpd\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.144403 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-run-httpd\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.144434 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-scripts\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.144452 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bns74\" (UniqueName: \"kubernetes.io/projected/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-kube-api-access-bns74\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.144477 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-config-data\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") "
pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.144573 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.245598 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-scripts\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.245654 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bns74\" (UniqueName: \"kubernetes.io/projected/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-kube-api-access-bns74\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.245680 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-config-data\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.245786 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.245823 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.245841 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-log-httpd\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.245874 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-run-httpd\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.246280 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-run-httpd\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.247170 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-log-httpd\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " 
pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.254373 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.254515 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-config-data\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.258396 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-scripts\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.264295 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.265598 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bns74\" (UniqueName: \"kubernetes.io/projected/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-kube-api-access-bns74\") pod \"ceilometer-0\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " pod="openstack/ceilometer-0" Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.383296 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.718769 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fd50edca-16dd-4aba-9b69-baa808ce0d7e","Type":"ContainerStarted","Data":"81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5"}
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.719655 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fd50edca-16dd-4aba-9b69-baa808ce0d7e" containerName="cinder-api-log" containerID="cri-o://f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261" gracePeriod=30
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.719755 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.720140 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fd50edca-16dd-4aba-9b69-baa808ce0d7e" containerName="cinder-api" containerID="cri-o://81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5" gracePeriod=30
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.724339 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a0ec673-a3a9-4554-926e-beadcc2fab09","Type":"ContainerStarted","Data":"4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb"}
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.734781 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" event={"ID":"a352e2a1-219a-4b55-9d8f-d4607dd3890c","Type":"ContainerStarted","Data":"86a3abf1966efd90643fdf0d03f758199c3718b6ab23ca089293aa663fb76b77"}
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.735395 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.749333 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.749313335 podStartE2EDuration="4.749313335s" podCreationTimestamp="2025-12-02 14:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:26.743945604 +0000 UTC m=+1312.159759455" watchObservedRunningTime="2025-12-02 14:04:26.749313335 +0000 UTC m=+1312.165127186"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.764035 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" podStartSLOduration=4.76402105 podStartE2EDuration="4.76402105s" podCreationTimestamp="2025-12-02 14:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:26.759177363 +0000 UTC m=+1312.174991214" watchObservedRunningTime="2025-12-02 14:04:26.76402105 +0000 UTC m=+1312.179834901"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.925357 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7805c9b7-1be2-499f-b3c9-939245983c97" path="/var/lib/kubelet/pods/7805c9b7-1be2-499f-b3c9-939245983c97/volumes"
Dec 02 14:04:26 crc kubenswrapper[4900]: I1202 14:04:26.926022 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
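
"Killing container with a grace period ... gracePeriod=30" above is the normal SIGTERM path for the old cinder-api-0 being replaced; in the "container finished" entries shortly below, exitCode=143 follows the usual 128+signal convention (128 + 15, i.e. SIGTERM), while exitCode=0 means the process shut down cleanly within the grace period (a SIGKILL after timeout would instead surface as 137). A small decoder for such codes (illustrative):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        for _, code := range []int{0, 143} {
            if code > 128 {
                // 143 - 128 = 15 => SIGTERM ("terminated")
                fmt.Printf("exitCode=%d => killed by signal %d (%v)\n",
                    code, code-128, syscall.Signal(code-128))
            } else {
                fmt.Printf("exitCode=%d => clean exit\n", code)
            }
        }
    }
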
Dec 02 14:04:26 crc kubenswrapper[4900]: W1202 14:04:26.984629 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc77cc0c8_bd3f_46e6_84fc_0d8d329e3f73.slice/crio-36e7d9682997a0ea4fa05e7beb45a2c623fa16bb090cec86e3c3cc7f8c88aed7 WatchSource:0}: Error finding container 36e7d9682997a0ea4fa05e7beb45a2c623fa16bb090cec86e3c3cc7f8c88aed7: Status 404 returned error can't find the container with id 36e7d9682997a0ea4fa05e7beb45a2c623fa16bb090cec86e3c3cc7f8c88aed7
Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.508489 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.570072 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-scripts\") pod \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") "
Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.570140 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data-custom\") pod \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") "
Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.570174 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tczss\" (UniqueName: \"kubernetes.io/projected/fd50edca-16dd-4aba-9b69-baa808ce0d7e-kube-api-access-tczss\") pod \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") "
Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.570201 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd50edca-16dd-4aba-9b69-baa808ce0d7e-logs\") pod \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") "
Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.570239 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd50edca-16dd-4aba-9b69-baa808ce0d7e-etc-machine-id\") pod \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") "
Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.570298 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-combined-ca-bundle\") pod \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") "
Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.570327 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data\") pod \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\" (UID: \"fd50edca-16dd-4aba-9b69-baa808ce0d7e\") "
Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.570763 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd50edca-16dd-4aba-9b69-baa808ce0d7e-logs" (OuterVolumeSpecName: "logs") pod "fd50edca-16dd-4aba-9b69-baa808ce0d7e" (UID: "fd50edca-16dd-4aba-9b69-baa808ce0d7e"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.571206 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd50edca-16dd-4aba-9b69-baa808ce0d7e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fd50edca-16dd-4aba-9b69-baa808ce0d7e" (UID: "fd50edca-16dd-4aba-9b69-baa808ce0d7e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.588874 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fd50edca-16dd-4aba-9b69-baa808ce0d7e" (UID: "fd50edca-16dd-4aba-9b69-baa808ce0d7e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.588967 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd50edca-16dd-4aba-9b69-baa808ce0d7e-kube-api-access-tczss" (OuterVolumeSpecName: "kube-api-access-tczss") pod "fd50edca-16dd-4aba-9b69-baa808ce0d7e" (UID: "fd50edca-16dd-4aba-9b69-baa808ce0d7e"). InnerVolumeSpecName "kube-api-access-tczss". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.615813 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-scripts" (OuterVolumeSpecName: "scripts") pod "fd50edca-16dd-4aba-9b69-baa808ce0d7e" (UID: "fd50edca-16dd-4aba-9b69-baa808ce0d7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.618999 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd50edca-16dd-4aba-9b69-baa808ce0d7e" (UID: "fd50edca-16dd-4aba-9b69-baa808ce0d7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.643035 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data" (OuterVolumeSpecName: "config-data") pod "fd50edca-16dd-4aba-9b69-baa808ce0d7e" (UID: "fd50edca-16dd-4aba-9b69-baa808ce0d7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.672447 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.672597 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.672616 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tczss\" (UniqueName: \"kubernetes.io/projected/fd50edca-16dd-4aba-9b69-baa808ce0d7e-kube-api-access-tczss\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.672627 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd50edca-16dd-4aba-9b69-baa808ce0d7e-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.672638 4900 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd50edca-16dd-4aba-9b69-baa808ce0d7e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.672667 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.672677 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd50edca-16dd-4aba-9b69-baa808ce0d7e-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.779714 4900 generic.go:334] "Generic (PLEG): container finished" podID="fd50edca-16dd-4aba-9b69-baa808ce0d7e" containerID="81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5" exitCode=0 Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.780748 4900 generic.go:334] "Generic (PLEG): container finished" podID="fd50edca-16dd-4aba-9b69-baa808ce0d7e" containerID="f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261" exitCode=143 Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.779817 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fd50edca-16dd-4aba-9b69-baa808ce0d7e","Type":"ContainerDied","Data":"81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5"} Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.780944 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fd50edca-16dd-4aba-9b69-baa808ce0d7e","Type":"ContainerDied","Data":"f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261"} Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.780957 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fd50edca-16dd-4aba-9b69-baa808ce0d7e","Type":"ContainerDied","Data":"4c4cab3911416b495ca2298e71527421bfa372a6dd2a2583da5b6434818a602c"} Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.780973 4900 scope.go:117] "RemoveContainer" containerID="81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.779793 
4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.783922 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a0ec673-a3a9-4554-926e-beadcc2fab09","Type":"ContainerStarted","Data":"e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952"} Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.785907 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73","Type":"ContainerStarted","Data":"36e7d9682997a0ea4fa05e7beb45a2c623fa16bb090cec86e3c3cc7f8c88aed7"} Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.806094 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.358621572 podStartE2EDuration="5.80607816s" podCreationTimestamp="2025-12-02 14:04:22 +0000 UTC" firstStartedPulling="2025-12-02 14:04:23.544258499 +0000 UTC m=+1308.960072350" lastFinishedPulling="2025-12-02 14:04:24.991715087 +0000 UTC m=+1310.407528938" observedRunningTime="2025-12-02 14:04:27.802258182 +0000 UTC m=+1313.218072033" watchObservedRunningTime="2025-12-02 14:04:27.80607816 +0000 UTC m=+1313.221892011" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.809394 4900 scope.go:117] "RemoveContainer" containerID="f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.836807 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.844328 4900 scope.go:117] "RemoveContainer" containerID="81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5" Dec 02 14:04:27 crc kubenswrapper[4900]: E1202 14:04:27.846038 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5\": container with ID starting with 81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5 not found: ID does not exist" containerID="81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.846085 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5"} err="failed to get container status \"81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5\": rpc error: code = NotFound desc = could not find container \"81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5\": container with ID starting with 81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5 not found: ID does not exist" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.846112 4900 scope.go:117] "RemoveContainer" containerID="f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261" Dec 02 14:04:27 crc kubenswrapper[4900]: E1202 14:04:27.849277 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261\": container with ID starting with f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261 not found: ID does not exist" containerID="f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261" Dec 02 14:04:27 
crc kubenswrapper[4900]: I1202 14:04:27.849437 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261"} err="failed to get container status \"f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261\": rpc error: code = NotFound desc = could not find container \"f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261\": container with ID starting with f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261 not found: ID does not exist" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.849560 4900 scope.go:117] "RemoveContainer" containerID="81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.849705 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.850444 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5"} err="failed to get container status \"81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5\": rpc error: code = NotFound desc = could not find container \"81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5\": container with ID starting with 81efbdc72b8fd13261c3e01398c79cda5761434468fd4a3bb720835488cd79b5 not found: ID does not exist" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.850477 4900 scope.go:117] "RemoveContainer" containerID="f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.854198 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261"} err="failed to get container status \"f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261\": rpc error: code = NotFound desc = could not find container \"f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261\": container with ID starting with f7adc76bec4ff321790bb8e0650aff67c0707c1e54b05cc1b232ecc8207dd261 not found: ID does not exist" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.870615 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:04:27 crc kubenswrapper[4900]: E1202 14:04:27.871108 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50edca-16dd-4aba-9b69-baa808ce0d7e" containerName="cinder-api-log" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.871125 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50edca-16dd-4aba-9b69-baa808ce0d7e" containerName="cinder-api-log" Dec 02 14:04:27 crc kubenswrapper[4900]: E1202 14:04:27.871149 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50edca-16dd-4aba-9b69-baa808ce0d7e" containerName="cinder-api" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.871155 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50edca-16dd-4aba-9b69-baa808ce0d7e" containerName="cinder-api" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.871349 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd50edca-16dd-4aba-9b69-baa808ce0d7e" containerName="cinder-api" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.871384 4900 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fd50edca-16dd-4aba-9b69-baa808ce0d7e" containerName="cinder-api-log" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.872353 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.875127 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.875159 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.875363 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.880883 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.977405 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-scripts\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.977540 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data-custom\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.977575 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g95rj\" (UniqueName: \"kubernetes.io/projected/557a84eb-0882-44c1-b4db-7c8a19e1303d-kube-api-access-g95rj\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.977807 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.977872 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.978009 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.978102 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/557a84eb-0882-44c1-b4db-7c8a19e1303d-logs\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:27 crc kubenswrapper[4900]: 
I1202 14:04:27.978173 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:27 crc kubenswrapper[4900]: I1202 14:04:27.978238 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/557a84eb-0882-44c1-b4db-7c8a19e1303d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.050103 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.080716 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/557a84eb-0882-44c1-b4db-7c8a19e1303d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.087764 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-scripts\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.088082 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data-custom\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.088125 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g95rj\" (UniqueName: \"kubernetes.io/projected/557a84eb-0882-44c1-b4db-7c8a19e1303d-kube-api-access-g95rj\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.088207 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.088228 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.088271 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.088303 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/557a84eb-0882-44c1-b4db-7c8a19e1303d-logs\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.088320 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.093137 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.093768 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.094847 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data-custom\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.095300 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/557a84eb-0882-44c1-b4db-7c8a19e1303d-logs\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.095440 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/557a84eb-0882-44c1-b4db-7c8a19e1303d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.096297 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-scripts\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.096804 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.097517 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data\") pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.114808 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g95rj\" (UniqueName: \"kubernetes.io/projected/557a84eb-0882-44c1-b4db-7c8a19e1303d-kube-api-access-g95rj\") 
pod \"cinder-api-0\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.189855 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.712225 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:04:28 crc kubenswrapper[4900]: W1202 14:04:28.726435 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod557a84eb_0882_44c1_b4db_7c8a19e1303d.slice/crio-5a3f65bd50c619be17abb40ae267765cd3c4242d17696dfc1b11ad8e4c6cff2f WatchSource:0}: Error finding container 5a3f65bd50c619be17abb40ae267765cd3c4242d17696dfc1b11ad8e4c6cff2f: Status 404 returned error can't find the container with id 5a3f65bd50c619be17abb40ae267765cd3c4242d17696dfc1b11ad8e4c6cff2f Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.795918 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73","Type":"ContainerStarted","Data":"3504e1e69825f85aa3759da81f54ec9e34ea36cd6893dc197013299a5cd1ff75"} Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.799719 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"557a84eb-0882-44c1-b4db-7c8a19e1303d","Type":"ContainerStarted","Data":"5a3f65bd50c619be17abb40ae267765cd3c4242d17696dfc1b11ad8e4c6cff2f"} Dec 02 14:04:28 crc kubenswrapper[4900]: I1202 14:04:28.922999 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd50edca-16dd-4aba-9b69-baa808ce0d7e" path="/var/lib/kubelet/pods/fd50edca-16dd-4aba-9b69-baa808ce0d7e/volumes" Dec 02 14:04:29 crc kubenswrapper[4900]: I1202 14:04:29.807713 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"557a84eb-0882-44c1-b4db-7c8a19e1303d","Type":"ContainerStarted","Data":"780db50c9c1ada3d5ce136afa95e168fc995789ee6f6731c4c9529970d7dfd6e"} Dec 02 14:04:29 crc kubenswrapper[4900]: I1202 14:04:29.809754 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73","Type":"ContainerStarted","Data":"863afa87ccdad02084c44780deb2d0531ba616fca5c768a7bf95fcaecd10bc57"} Dec 02 14:04:30 crc kubenswrapper[4900]: I1202 14:04:30.821266 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"557a84eb-0882-44c1-b4db-7c8a19e1303d","Type":"ContainerStarted","Data":"440f785e6ac340819ae403625dc734fd43ed1abbd0b52db9080939d07419abce"} Dec 02 14:04:30 crc kubenswrapper[4900]: I1202 14:04:30.821486 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 02 14:04:30 crc kubenswrapper[4900]: I1202 14:04:30.845987 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.845961989 podStartE2EDuration="3.845961989s" podCreationTimestamp="2025-12-02 14:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:30.839778985 +0000 UTC m=+1316.255592836" watchObservedRunningTime="2025-12-02 14:04:30.845961989 +0000 UTC m=+1316.261775840" Dec 02 14:04:31 crc kubenswrapper[4900]: I1202 14:04:31.112196 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:31 crc kubenswrapper[4900]: I1202 14:04:31.269144 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:04:31 crc kubenswrapper[4900]: I1202 14:04:31.353876 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b9bb78f94-sjnpb"] Dec 02 14:04:31 crc kubenswrapper[4900]: I1202 14:04:31.354634 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b9bb78f94-sjnpb" podUID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerName="barbican-api-log" containerID="cri-o://d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e" gracePeriod=30 Dec 02 14:04:31 crc kubenswrapper[4900]: I1202 14:04:31.354824 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b9bb78f94-sjnpb" podUID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerName="barbican-api" containerID="cri-o://8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97" gracePeriod=30 Dec 02 14:04:31 crc kubenswrapper[4900]: I1202 14:04:31.831220 4900 generic.go:334] "Generic (PLEG): container finished" podID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerID="d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e" exitCode=143 Dec 02 14:04:31 crc kubenswrapper[4900]: I1202 14:04:31.831288 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9bb78f94-sjnpb" event={"ID":"245e38af-1ae6-461f-8422-4ec8fed4f781","Type":"ContainerDied","Data":"d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e"} Dec 02 14:04:32 crc kubenswrapper[4900]: I1202 14:04:32.842136 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73","Type":"ContainerStarted","Data":"d7a6e6e60fd08480f13d8ae6bd4325ac1afbde227364b9dcab543204bdda319d"} Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.015008 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6595dffb96-c4mrx" Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.165789 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.230044 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zzsgk"] Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.230255 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" podUID="725f3563-28dc-40f8-b01e-ecc75598997d" containerName="dnsmasq-dns" containerID="cri-o://902d8e06a92c658dceedff2f53c9c8b32a333f448fa97c1a285650bfa11663a7" gracePeriod=10 Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.386166 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.438919 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.862935 4900 generic.go:334] "Generic (PLEG): container finished" podID="725f3563-28dc-40f8-b01e-ecc75598997d" containerID="902d8e06a92c658dceedff2f53c9c8b32a333f448fa97c1a285650bfa11663a7" exitCode=0 Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.863167 4900 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7a0ec673-a3a9-4554-926e-beadcc2fab09" containerName="cinder-scheduler" containerID="cri-o://4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb" gracePeriod=30 Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.863251 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7a0ec673-a3a9-4554-926e-beadcc2fab09" containerName="probe" containerID="cri-o://e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952" gracePeriod=30 Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.863264 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" event={"ID":"725f3563-28dc-40f8-b01e-ecc75598997d","Type":"ContainerDied","Data":"902d8e06a92c658dceedff2f53c9c8b32a333f448fa97c1a285650bfa11663a7"} Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.877774 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.878984 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.897865 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.898216 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-d56lp" Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.898389 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 02 14:04:33 crc kubenswrapper[4900]: I1202 14:04:33.933255 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.034736 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.034791 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.034890 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xc8r\" (UniqueName: \"kubernetes.io/projected/6ff6dcaf-b619-4169-9b36-81ee92264d71-kube-api-access-7xc8r\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.034960 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config-secret\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.136799 4900 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.136849 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.136884 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xc8r\" (UniqueName: \"kubernetes.io/projected/6ff6dcaf-b619-4169-9b36-81ee92264d71-kube-api-access-7xc8r\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.136930 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config-secret\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.141614 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.145037 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.160244 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xc8r\" (UniqueName: \"kubernetes.io/projected/6ff6dcaf-b619-4169-9b36-81ee92264d71-kube-api-access-7xc8r\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.163226 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config-secret\") pod \"openstackclient\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.183317 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.340122 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-nb\") pod \"725f3563-28dc-40f8-b01e-ecc75598997d\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.340234 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fppn\" (UniqueName: \"kubernetes.io/projected/725f3563-28dc-40f8-b01e-ecc75598997d-kube-api-access-8fppn\") pod \"725f3563-28dc-40f8-b01e-ecc75598997d\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.340320 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-config\") pod \"725f3563-28dc-40f8-b01e-ecc75598997d\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.340566 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-svc\") pod \"725f3563-28dc-40f8-b01e-ecc75598997d\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.340605 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-swift-storage-0\") pod \"725f3563-28dc-40f8-b01e-ecc75598997d\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.340634 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-sb\") pod \"725f3563-28dc-40f8-b01e-ecc75598997d\" (UID: \"725f3563-28dc-40f8-b01e-ecc75598997d\") " Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.347193 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/725f3563-28dc-40f8-b01e-ecc75598997d-kube-api-access-8fppn" (OuterVolumeSpecName: "kube-api-access-8fppn") pod "725f3563-28dc-40f8-b01e-ecc75598997d" (UID: "725f3563-28dc-40f8-b01e-ecc75598997d"). InnerVolumeSpecName "kube-api-access-8fppn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.392485 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "725f3563-28dc-40f8-b01e-ecc75598997d" (UID: "725f3563-28dc-40f8-b01e-ecc75598997d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.400139 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "725f3563-28dc-40f8-b01e-ecc75598997d" (UID: "725f3563-28dc-40f8-b01e-ecc75598997d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.407259 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "725f3563-28dc-40f8-b01e-ecc75598997d" (UID: "725f3563-28dc-40f8-b01e-ecc75598997d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.405728 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-config" (OuterVolumeSpecName: "config") pod "725f3563-28dc-40f8-b01e-ecc75598997d" (UID: "725f3563-28dc-40f8-b01e-ecc75598997d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.443609 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.443658 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.443669 4900 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.443679 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.443687 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fppn\" (UniqueName: \"kubernetes.io/projected/725f3563-28dc-40f8-b01e-ecc75598997d-kube-api-access-8fppn\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.447096 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "725f3563-28dc-40f8-b01e-ecc75598997d" (UID: "725f3563-28dc-40f8-b01e-ecc75598997d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.447159 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.544904 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/725f3563-28dc-40f8-b01e-ecc75598997d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.820563 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b9bb78f94-sjnpb" podUID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:56688->10.217.0.155:9311: read: connection reset by peer" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.820886 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b9bb78f94-sjnpb" podUID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:56692->10.217.0.155:9311: read: connection reset by peer" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.892825 4900 generic.go:334] "Generic (PLEG): container finished" podID="7a0ec673-a3a9-4554-926e-beadcc2fab09" containerID="e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952" exitCode=0 Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.892934 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a0ec673-a3a9-4554-926e-beadcc2fab09","Type":"ContainerDied","Data":"e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952"} Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.901008 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" event={"ID":"725f3563-28dc-40f8-b01e-ecc75598997d","Type":"ContainerDied","Data":"17c1cedd5d4bd074d4849b6e7cd66d0edbe806286f94a09c7cb350301fb92745"} Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.901078 4900 scope.go:117] "RemoveContainer" containerID="902d8e06a92c658dceedff2f53c9c8b32a333f448fa97c1a285650bfa11663a7" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.901213 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zzsgk" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.904757 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.934543 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73","Type":"ContainerStarted","Data":"14ebf9194e3548cc1e22d6dc2af7a081c7df02d615a2ca3b6660ba75e2e489c2"} Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.934585 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:04:34 crc kubenswrapper[4900]: I1202 14:04:34.995327 4900 scope.go:117] "RemoveContainer" containerID="36175c4d5dbc391cf9b4bf3670fd6133585a737087ee3e99bd3a77f35d2369b2" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.003977 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.13959236 podStartE2EDuration="10.003957823s" podCreationTimestamp="2025-12-02 14:04:25 +0000 UTC" firstStartedPulling="2025-12-02 14:04:26.986913092 +0000 UTC m=+1312.402726943" lastFinishedPulling="2025-12-02 14:04:33.851278555 +0000 UTC m=+1319.267092406" observedRunningTime="2025-12-02 14:04:34.990837733 +0000 UTC m=+1320.406651594" watchObservedRunningTime="2025-12-02 14:04:35.003957823 +0000 UTC m=+1320.419771674" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.016423 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zzsgk"] Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.027456 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zzsgk"] Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.238075 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.361480 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245e38af-1ae6-461f-8422-4ec8fed4f781-logs\") pod \"245e38af-1ae6-461f-8422-4ec8fed4f781\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.361654 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data-custom\") pod \"245e38af-1ae6-461f-8422-4ec8fed4f781\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.361693 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data\") pod \"245e38af-1ae6-461f-8422-4ec8fed4f781\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.361729 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2fnc\" (UniqueName: \"kubernetes.io/projected/245e38af-1ae6-461f-8422-4ec8fed4f781-kube-api-access-d2fnc\") pod \"245e38af-1ae6-461f-8422-4ec8fed4f781\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.361755 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-combined-ca-bundle\") pod \"245e38af-1ae6-461f-8422-4ec8fed4f781\" (UID: \"245e38af-1ae6-461f-8422-4ec8fed4f781\") " Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.361987 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/245e38af-1ae6-461f-8422-4ec8fed4f781-logs" (OuterVolumeSpecName: "logs") pod "245e38af-1ae6-461f-8422-4ec8fed4f781" (UID: "245e38af-1ae6-461f-8422-4ec8fed4f781"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.362394 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245e38af-1ae6-461f-8422-4ec8fed4f781-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.368274 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "245e38af-1ae6-461f-8422-4ec8fed4f781" (UID: "245e38af-1ae6-461f-8422-4ec8fed4f781"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.368397 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245e38af-1ae6-461f-8422-4ec8fed4f781-kube-api-access-d2fnc" (OuterVolumeSpecName: "kube-api-access-d2fnc") pod "245e38af-1ae6-461f-8422-4ec8fed4f781" (UID: "245e38af-1ae6-461f-8422-4ec8fed4f781"). InnerVolumeSpecName "kube-api-access-d2fnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.410878 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "245e38af-1ae6-461f-8422-4ec8fed4f781" (UID: "245e38af-1ae6-461f-8422-4ec8fed4f781"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.419401 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data" (OuterVolumeSpecName: "config-data") pod "245e38af-1ae6-461f-8422-4ec8fed4f781" (UID: "245e38af-1ae6-461f-8422-4ec8fed4f781"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.463809 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.463841 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.463850 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2fnc\" (UniqueName: \"kubernetes.io/projected/245e38af-1ae6-461f-8422-4ec8fed4f781-kube-api-access-d2fnc\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.463861 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245e38af-1ae6-461f-8422-4ec8fed4f781-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.938829 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6ff6dcaf-b619-4169-9b36-81ee92264d71","Type":"ContainerStarted","Data":"0d760eef69c9ace0f6dd56e26da2135b2d7f748c7dd33a73b8fcbfcd158512f0"} Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.942246 4900 generic.go:334] "Generic (PLEG): container finished" podID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerID="8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97" exitCode=0 Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.942332 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b9bb78f94-sjnpb" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.942337 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9bb78f94-sjnpb" event={"ID":"245e38af-1ae6-461f-8422-4ec8fed4f781","Type":"ContainerDied","Data":"8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97"} Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.942521 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9bb78f94-sjnpb" event={"ID":"245e38af-1ae6-461f-8422-4ec8fed4f781","Type":"ContainerDied","Data":"e4b127f5bd8662a9e18fa2dd24a0e94df2cb6d927f6a1192126e381bc3b0f4f9"} Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.942565 4900 scope.go:117] "RemoveContainer" containerID="8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.985114 4900 scope.go:117] "RemoveContainer" containerID="d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e" Dec 02 14:04:35 crc kubenswrapper[4900]: I1202 14:04:35.990257 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b9bb78f94-sjnpb"] Dec 02 14:04:36 crc kubenswrapper[4900]: I1202 14:04:36.002204 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-b9bb78f94-sjnpb"] Dec 02 14:04:36 crc kubenswrapper[4900]: I1202 14:04:36.026947 4900 scope.go:117] "RemoveContainer" containerID="8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97" Dec 02 14:04:36 crc kubenswrapper[4900]: E1202 14:04:36.027493 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97\": container with ID starting with 8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97 not found: ID does not exist" containerID="8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97" Dec 02 14:04:36 crc kubenswrapper[4900]: I1202 14:04:36.027527 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97"} err="failed to get container status \"8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97\": rpc error: code = NotFound desc = could not find container \"8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97\": container with ID starting with 8284f147302fd667ae19f136d38fdc08a93b2a4a0d9a566f2a4d9b6b06b83a97 not found: ID does not exist" Dec 02 14:04:36 crc kubenswrapper[4900]: I1202 14:04:36.027552 4900 scope.go:117] "RemoveContainer" containerID="d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e" Dec 02 14:04:36 crc kubenswrapper[4900]: E1202 14:04:36.027916 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e\": container with ID starting with d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e not found: ID does not exist" containerID="d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e" Dec 02 14:04:36 crc kubenswrapper[4900]: I1202 14:04:36.027989 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e"} err="failed to get container status 
\"d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e\": rpc error: code = NotFound desc = could not find container \"d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e\": container with ID starting with d88fa83e141f727c0e8e23c68edce9caf7b6b0c7914328a3039578a9e823bb9e not found: ID does not exist" Dec 02 14:04:36 crc kubenswrapper[4900]: I1202 14:04:36.932442 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="245e38af-1ae6-461f-8422-4ec8fed4f781" path="/var/lib/kubelet/pods/245e38af-1ae6-461f-8422-4ec8fed4f781/volumes" Dec 02 14:04:36 crc kubenswrapper[4900]: I1202 14:04:36.933672 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="725f3563-28dc-40f8-b01e-ecc75598997d" path="/var/lib/kubelet/pods/725f3563-28dc-40f8-b01e-ecc75598997d/volumes" Dec 02 14:04:37 crc kubenswrapper[4900]: E1202 14:04:37.500628 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a0ec673_a3a9_4554_926e_beadcc2fab09.slice/crio-conmon-4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb.scope\": RecentStats: unable to find data in memory cache]" Dec 02 14:04:37 crc kubenswrapper[4900]: I1202 14:04:37.843056 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 14:04:37 crc kubenswrapper[4900]: I1202 14:04:37.973257 4900 generic.go:334] "Generic (PLEG): container finished" podID="7a0ec673-a3a9-4554-926e-beadcc2fab09" containerID="4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb" exitCode=0 Dec 02 14:04:37 crc kubenswrapper[4900]: I1202 14:04:37.973396 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a0ec673-a3a9-4554-926e-beadcc2fab09","Type":"ContainerDied","Data":"4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb"} Dec 02 14:04:37 crc kubenswrapper[4900]: I1202 14:04:37.973957 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7a0ec673-a3a9-4554-926e-beadcc2fab09","Type":"ContainerDied","Data":"1224c7152432bc39654b5adde73753b38cee30ea67f587886601f5dbebe0156e"} Dec 02 14:04:37 crc kubenswrapper[4900]: I1202 14:04:37.973445 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 14:04:37 crc kubenswrapper[4900]: I1202 14:04:37.973989 4900 scope.go:117] "RemoveContainer" containerID="e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:37.998557 4900 scope.go:117] "RemoveContainer" containerID="4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.012912 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8dkj\" (UniqueName: \"kubernetes.io/projected/7a0ec673-a3a9-4554-926e-beadcc2fab09-kube-api-access-n8dkj\") pod \"7a0ec673-a3a9-4554-926e-beadcc2fab09\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.013064 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data-custom\") pod \"7a0ec673-a3a9-4554-926e-beadcc2fab09\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.013098 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a0ec673-a3a9-4554-926e-beadcc2fab09-etc-machine-id\") pod \"7a0ec673-a3a9-4554-926e-beadcc2fab09\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.013140 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data\") pod \"7a0ec673-a3a9-4554-926e-beadcc2fab09\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.013171 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-combined-ca-bundle\") pod \"7a0ec673-a3a9-4554-926e-beadcc2fab09\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.013242 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-scripts\") pod \"7a0ec673-a3a9-4554-926e-beadcc2fab09\" (UID: \"7a0ec673-a3a9-4554-926e-beadcc2fab09\") " Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.013240 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a0ec673-a3a9-4554-926e-beadcc2fab09-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7a0ec673-a3a9-4554-926e-beadcc2fab09" (UID: "7a0ec673-a3a9-4554-926e-beadcc2fab09"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.021547 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-scripts" (OuterVolumeSpecName: "scripts") pod "7a0ec673-a3a9-4554-926e-beadcc2fab09" (UID: "7a0ec673-a3a9-4554-926e-beadcc2fab09"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.021598 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7a0ec673-a3a9-4554-926e-beadcc2fab09" (UID: "7a0ec673-a3a9-4554-926e-beadcc2fab09"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.021821 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0ec673-a3a9-4554-926e-beadcc2fab09-kube-api-access-n8dkj" (OuterVolumeSpecName: "kube-api-access-n8dkj") pod "7a0ec673-a3a9-4554-926e-beadcc2fab09" (UID: "7a0ec673-a3a9-4554-926e-beadcc2fab09"). InnerVolumeSpecName "kube-api-access-n8dkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.071523 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a0ec673-a3a9-4554-926e-beadcc2fab09" (UID: "7a0ec673-a3a9-4554-926e-beadcc2fab09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.114309 4900 scope.go:117] "RemoveContainer" containerID="e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.114942 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8dkj\" (UniqueName: \"kubernetes.io/projected/7a0ec673-a3a9-4554-926e-beadcc2fab09-kube-api-access-n8dkj\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.114974 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.114983 4900 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7a0ec673-a3a9-4554-926e-beadcc2fab09-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.114992 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.115000 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:38 crc kubenswrapper[4900]: E1202 14:04:38.115387 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952\": container with ID starting with e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952 not found: ID does not exist" containerID="e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.115420 4900 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952"} err="failed to get container status \"e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952\": rpc error: code = NotFound desc = could not find container \"e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952\": container with ID starting with e9315163ef8801315462c049826edb8c07847413024137676f2881521c20f952 not found: ID does not exist" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.115447 4900 scope.go:117] "RemoveContainer" containerID="4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb" Dec 02 14:04:38 crc kubenswrapper[4900]: E1202 14:04:38.116326 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb\": container with ID starting with 4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb not found: ID does not exist" containerID="4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.116369 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb"} err="failed to get container status \"4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb\": rpc error: code = NotFound desc = could not find container \"4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb\": container with ID starting with 4c6e09eee2bbe7a9b44145f35596701abfb62bcc99fed58408018a6d46315fcb not found: ID does not exist" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.116818 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data" (OuterVolumeSpecName: "config-data") pod "7a0ec673-a3a9-4554-926e-beadcc2fab09" (UID: "7a0ec673-a3a9-4554-926e-beadcc2fab09"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.216930 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0ec673-a3a9-4554-926e-beadcc2fab09-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.347828 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.361824 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.374515 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:04:38 crc kubenswrapper[4900]: E1202 14:04:38.374921 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerName="barbican-api-log" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.374938 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerName="barbican-api-log" Dec 02 14:04:38 crc kubenswrapper[4900]: E1202 14:04:38.374952 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="725f3563-28dc-40f8-b01e-ecc75598997d" containerName="init" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.374959 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="725f3563-28dc-40f8-b01e-ecc75598997d" containerName="init" Dec 02 14:04:38 crc kubenswrapper[4900]: E1202 14:04:38.374977 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0ec673-a3a9-4554-926e-beadcc2fab09" containerName="probe" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.374983 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0ec673-a3a9-4554-926e-beadcc2fab09" containerName="probe" Dec 02 14:04:38 crc kubenswrapper[4900]: E1202 14:04:38.374997 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0ec673-a3a9-4554-926e-beadcc2fab09" containerName="cinder-scheduler" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.375002 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0ec673-a3a9-4554-926e-beadcc2fab09" containerName="cinder-scheduler" Dec 02 14:04:38 crc kubenswrapper[4900]: E1202 14:04:38.375015 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="725f3563-28dc-40f8-b01e-ecc75598997d" containerName="dnsmasq-dns" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.375021 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="725f3563-28dc-40f8-b01e-ecc75598997d" containerName="dnsmasq-dns" Dec 02 14:04:38 crc kubenswrapper[4900]: E1202 14:04:38.375042 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerName="barbican-api" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.375049 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerName="barbican-api" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.375204 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="725f3563-28dc-40f8-b01e-ecc75598997d" containerName="dnsmasq-dns" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.375220 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerName="barbican-api" Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 
14:04:38.375236 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="245e38af-1ae6-461f-8422-4ec8fed4f781" containerName="barbican-api-log"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.375247 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0ec673-a3a9-4554-926e-beadcc2fab09" containerName="probe"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.375261 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0ec673-a3a9-4554-926e-beadcc2fab09" containerName="cinder-scheduler"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.376183 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.392256 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.392424 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.522548 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.522611 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgphn\" (UniqueName: \"kubernetes.io/projected/c49f994e-0d9a-4312-995d-d84d93f31f01-kube-api-access-jgphn\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.522690 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-scripts\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.522727 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c49f994e-0d9a-4312-995d-d84d93f31f01-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.522801 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.522826 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.624788 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c49f994e-0d9a-4312-995d-d84d93f31f01-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.624880 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.624924 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.625618 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.625678 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgphn\" (UniqueName: \"kubernetes.io/projected/c49f994e-0d9a-4312-995d-d84d93f31f01-kube-api-access-jgphn\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.625721 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-scripts\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.627806 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c49f994e-0d9a-4312-995d-d84d93f31f01-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.630409 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.632634 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.633445 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.639691 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-scripts\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.646630 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgphn\" (UniqueName: \"kubernetes.io/projected/c49f994e-0d9a-4312-995d-d84d93f31f01-kube-api-access-jgphn\") pod \"cinder-scheduler-0\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.695531 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 02 14:04:38 crc kubenswrapper[4900]: I1202 14:04:38.928247 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a0ec673-a3a9-4554-926e-beadcc2fab09" path="/var/lib/kubelet/pods/7a0ec673-a3a9-4554-926e-beadcc2fab09/volumes"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.213585 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 02 14:04:39 crc kubenswrapper[4900]: W1202 14:04:39.217786 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc49f994e_0d9a_4312_995d_d84d93f31f01.slice/crio-91bf73d9fd7b730de83c0c5bdec437554ab5e64ee9f986dca6b070f5d83dba2c WatchSource:0}: Error finding container 91bf73d9fd7b730de83c0c5bdec437554ab5e64ee9f986dca6b070f5d83dba2c: Status 404 returned error can't find the container with id 91bf73d9fd7b730de83c0c5bdec437554ab5e64ee9f986dca6b070f5d83dba2c
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.495260 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5b7cc7fc75-qdmm5"]
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.496899 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.507568 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.507764 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.507942 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.510198 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b7cc7fc75-qdmm5"]
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.648779 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-public-tls-certs\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.648831 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-internal-tls-certs\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.648880 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-log-httpd\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.648897 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-combined-ca-bundle\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.648923 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-run-httpd\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.648942 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-config-data\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.648992 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkn7z\" (UniqueName: \"kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-kube-api-access-lkn7z\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.649019 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-etc-swift\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.750126 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-combined-ca-bundle\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.750463 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-log-httpd\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.750497 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-run-httpd\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.750518 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-config-data\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.750572 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkn7z\" (UniqueName: \"kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-kube-api-access-lkn7z\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.750599 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-etc-swift\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.750670 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-public-tls-certs\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.750700 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-internal-tls-certs\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.751778 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-log-httpd\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.753462 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-run-httpd\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.754275 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-combined-ca-bundle\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.755599 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-internal-tls-certs\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.758239 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-public-tls-certs\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.762081 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-etc-swift\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.763331 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-config-data\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.769820 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkn7z\" (UniqueName: \"kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-kube-api-access-lkn7z\") pod \"swift-proxy-5b7cc7fc75-qdmm5\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:39 crc kubenswrapper[4900]: I1202 14:04:39.830950 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.059736 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c49f994e-0d9a-4312-995d-d84d93f31f01","Type":"ContainerStarted","Data":"7e2a5e62fdfd6d58e261c3a59e316fbb35eb6ecbeecd1f0c5b424484dcb2b5d5"}
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.060003 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c49f994e-0d9a-4312-995d-d84d93f31f01","Type":"ContainerStarted","Data":"91bf73d9fd7b730de83c0c5bdec437554ab5e64ee9f986dca6b070f5d83dba2c"}
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.426399 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-s9gw6"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.428095 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s9gw6"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.450445 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s9gw6"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.471166 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.544284 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b7cc7fc75-qdmm5"]
Dec 02 14:04:40 crc kubenswrapper[4900]: W1202 14:04:40.550771 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00825975_35eb_46d6_8aeb_753170564467.slice/crio-c51378899bf00d1d35448264548d97a29edb696e23a0c7d82dc9c0e5f31cabc0 WatchSource:0}: Error finding container c51378899bf00d1d35448264548d97a29edb696e23a0c7d82dc9c0e5f31cabc0: Status 404 returned error can't find the container with id c51378899bf00d1d35448264548d97a29edb696e23a0c7d82dc9c0e5f31cabc0
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.555723 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-lzg8p"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.557029 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lzg8p"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.560878 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b12b-account-create-update-26q6g"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.561776 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b12b-account-create-update-26q6g"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.566997 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.567892 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26491a11-ebc3-4790-af52-2459d18e0a5b-operator-scripts\") pod \"nova-api-db-create-s9gw6\" (UID: \"26491a11-ebc3-4790-af52-2459d18e0a5b\") " pod="openstack/nova-api-db-create-s9gw6"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.568064 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbcq\" (UniqueName: \"kubernetes.io/projected/26491a11-ebc3-4790-af52-2459d18e0a5b-kube-api-access-dwbcq\") pod \"nova-api-db-create-s9gw6\" (UID: \"26491a11-ebc3-4790-af52-2459d18e0a5b\") " pod="openstack/nova-api-db-create-s9gw6"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.573712 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lzg8p"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.588192 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b12b-account-create-update-26q6g"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.669809 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26491a11-ebc3-4790-af52-2459d18e0a5b-operator-scripts\") pod \"nova-api-db-create-s9gw6\" (UID: \"26491a11-ebc3-4790-af52-2459d18e0a5b\") " pod="openstack/nova-api-db-create-s9gw6"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.669885 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4czf6\" (UniqueName: \"kubernetes.io/projected/ac832e52-24a1-4e72-84ad-47259077412f-kube-api-access-4czf6\") pod \"nova-api-b12b-account-create-update-26q6g\" (UID: \"ac832e52-24a1-4e72-84ad-47259077412f\") " pod="openstack/nova-api-b12b-account-create-update-26q6g"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.670192 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac832e52-24a1-4e72-84ad-47259077412f-operator-scripts\") pod \"nova-api-b12b-account-create-update-26q6g\" (UID: \"ac832e52-24a1-4e72-84ad-47259077412f\") " pod="openstack/nova-api-b12b-account-create-update-26q6g"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.670261 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da048a5a-7760-4892-8b7d-8496bcba1142-operator-scripts\") pod \"nova-cell0-db-create-lzg8p\" (UID: \"da048a5a-7760-4892-8b7d-8496bcba1142\") " pod="openstack/nova-cell0-db-create-lzg8p"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.670315 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxtqd\" (UniqueName: \"kubernetes.io/projected/da048a5a-7760-4892-8b7d-8496bcba1142-kube-api-access-nxtqd\") pod \"nova-cell0-db-create-lzg8p\" (UID: \"da048a5a-7760-4892-8b7d-8496bcba1142\") " pod="openstack/nova-cell0-db-create-lzg8p"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.670379 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwbcq\" (UniqueName: \"kubernetes.io/projected/26491a11-ebc3-4790-af52-2459d18e0a5b-kube-api-access-dwbcq\") pod \"nova-api-db-create-s9gw6\" (UID: \"26491a11-ebc3-4790-af52-2459d18e0a5b\") " pod="openstack/nova-api-db-create-s9gw6"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.677226 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26491a11-ebc3-4790-af52-2459d18e0a5b-operator-scripts\") pod \"nova-api-db-create-s9gw6\" (UID: \"26491a11-ebc3-4790-af52-2459d18e0a5b\") " pod="openstack/nova-api-db-create-s9gw6"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.689621 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbcq\" (UniqueName: \"kubernetes.io/projected/26491a11-ebc3-4790-af52-2459d18e0a5b-kube-api-access-dwbcq\") pod \"nova-api-db-create-s9gw6\" (UID: \"26491a11-ebc3-4790-af52-2459d18e0a5b\") " pod="openstack/nova-api-db-create-s9gw6"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.742758 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-2n7dx"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.744002 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2n7dx"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.752931 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s9gw6"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.771190 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2641-account-create-update-42tfb"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.772369 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2641-account-create-update-42tfb"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.772809 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac832e52-24a1-4e72-84ad-47259077412f-operator-scripts\") pod \"nova-api-b12b-account-create-update-26q6g\" (UID: \"ac832e52-24a1-4e72-84ad-47259077412f\") " pod="openstack/nova-api-b12b-account-create-update-26q6g"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.772856 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da048a5a-7760-4892-8b7d-8496bcba1142-operator-scripts\") pod \"nova-cell0-db-create-lzg8p\" (UID: \"da048a5a-7760-4892-8b7d-8496bcba1142\") " pod="openstack/nova-cell0-db-create-lzg8p"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.773341 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxtqd\" (UniqueName: \"kubernetes.io/projected/da048a5a-7760-4892-8b7d-8496bcba1142-kube-api-access-nxtqd\") pod \"nova-cell0-db-create-lzg8p\" (UID: \"da048a5a-7760-4892-8b7d-8496bcba1142\") " pod="openstack/nova-cell0-db-create-lzg8p"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.773813 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4czf6\" (UniqueName: \"kubernetes.io/projected/ac832e52-24a1-4e72-84ad-47259077412f-kube-api-access-4czf6\") pod \"nova-api-b12b-account-create-update-26q6g\" (UID: \"ac832e52-24a1-4e72-84ad-47259077412f\") " pod="openstack/nova-api-b12b-account-create-update-26q6g"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.774005 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da048a5a-7760-4892-8b7d-8496bcba1142-operator-scripts\") pod \"nova-cell0-db-create-lzg8p\" (UID: \"da048a5a-7760-4892-8b7d-8496bcba1142\") " pod="openstack/nova-cell0-db-create-lzg8p"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.773668 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac832e52-24a1-4e72-84ad-47259077412f-operator-scripts\") pod \"nova-api-b12b-account-create-update-26q6g\" (UID: \"ac832e52-24a1-4e72-84ad-47259077412f\") " pod="openstack/nova-api-b12b-account-create-update-26q6g"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.787277 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.796059 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4czf6\" (UniqueName: \"kubernetes.io/projected/ac832e52-24a1-4e72-84ad-47259077412f-kube-api-access-4czf6\") pod \"nova-api-b12b-account-create-update-26q6g\" (UID: \"ac832e52-24a1-4e72-84ad-47259077412f\") " pod="openstack/nova-api-b12b-account-create-update-26q6g"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.805172 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2n7dx"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.807142 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxtqd\" (UniqueName: \"kubernetes.io/projected/da048a5a-7760-4892-8b7d-8496bcba1142-kube-api-access-nxtqd\") pod \"nova-cell0-db-create-lzg8p\" (UID: \"da048a5a-7760-4892-8b7d-8496bcba1142\") " pod="openstack/nova-cell0-db-create-lzg8p"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.815698 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2641-account-create-update-42tfb"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.878362 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmlnk\" (UniqueName: \"kubernetes.io/projected/3db11313-933b-4905-acd9-47c95d3014eb-kube-api-access-mmlnk\") pod \"nova-cell1-db-create-2n7dx\" (UID: \"3db11313-933b-4905-acd9-47c95d3014eb\") " pod="openstack/nova-cell1-db-create-2n7dx"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.878452 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwr9n\" (UniqueName: \"kubernetes.io/projected/bc006dae-0e38-4f4a-b689-40cfa0bacf72-kube-api-access-nwr9n\") pod \"nova-cell0-2641-account-create-update-42tfb\" (UID: \"bc006dae-0e38-4f4a-b689-40cfa0bacf72\") " pod="openstack/nova-cell0-2641-account-create-update-42tfb"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.878517 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc006dae-0e38-4f4a-b689-40cfa0bacf72-operator-scripts\") pod \"nova-cell0-2641-account-create-update-42tfb\" (UID: \"bc006dae-0e38-4f4a-b689-40cfa0bacf72\") " pod="openstack/nova-cell0-2641-account-create-update-42tfb"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.878628 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3db11313-933b-4905-acd9-47c95d3014eb-operator-scripts\") pod \"nova-cell1-db-create-2n7dx\" (UID: \"3db11313-933b-4905-acd9-47c95d3014eb\") " pod="openstack/nova-cell1-db-create-2n7dx"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.965198 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ccbf-account-create-update-z9clm"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.967706 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ccbf-account-create-update-z9clm"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.971292 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.981885 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3db11313-933b-4905-acd9-47c95d3014eb-operator-scripts\") pod \"nova-cell1-db-create-2n7dx\" (UID: \"3db11313-933b-4905-acd9-47c95d3014eb\") " pod="openstack/nova-cell1-db-create-2n7dx"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.981985 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmlnk\" (UniqueName: \"kubernetes.io/projected/3db11313-933b-4905-acd9-47c95d3014eb-kube-api-access-mmlnk\") pod \"nova-cell1-db-create-2n7dx\" (UID: \"3db11313-933b-4905-acd9-47c95d3014eb\") " pod="openstack/nova-cell1-db-create-2n7dx"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.982043 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwr9n\" (UniqueName: \"kubernetes.io/projected/bc006dae-0e38-4f4a-b689-40cfa0bacf72-kube-api-access-nwr9n\") pod \"nova-cell0-2641-account-create-update-42tfb\" (UID: \"bc006dae-0e38-4f4a-b689-40cfa0bacf72\") " pod="openstack/nova-cell0-2641-account-create-update-42tfb"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.982117 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc006dae-0e38-4f4a-b689-40cfa0bacf72-operator-scripts\") pod \"nova-cell0-2641-account-create-update-42tfb\" (UID: \"bc006dae-0e38-4f4a-b689-40cfa0bacf72\") " pod="openstack/nova-cell0-2641-account-create-update-42tfb"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.983565 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ccbf-account-create-update-z9clm"]
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.986468 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3db11313-933b-4905-acd9-47c95d3014eb-operator-scripts\") pod \"nova-cell1-db-create-2n7dx\" (UID: \"3db11313-933b-4905-acd9-47c95d3014eb\") " pod="openstack/nova-cell1-db-create-2n7dx"
Dec 02 14:04:40 crc kubenswrapper[4900]: I1202 14:04:40.990969 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc006dae-0e38-4f4a-b689-40cfa0bacf72-operator-scripts\") pod \"nova-cell0-2641-account-create-update-42tfb\" (UID: \"bc006dae-0e38-4f4a-b689-40cfa0bacf72\") " pod="openstack/nova-cell0-2641-account-create-update-42tfb"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.004960 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmlnk\" (UniqueName: \"kubernetes.io/projected/3db11313-933b-4905-acd9-47c95d3014eb-kube-api-access-mmlnk\") pod \"nova-cell1-db-create-2n7dx\" (UID: \"3db11313-933b-4905-acd9-47c95d3014eb\") " pod="openstack/nova-cell1-db-create-2n7dx"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.006790 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwr9n\" (UniqueName: \"kubernetes.io/projected/bc006dae-0e38-4f4a-b689-40cfa0bacf72-kube-api-access-nwr9n\") pod \"nova-cell0-2641-account-create-update-42tfb\" (UID: \"bc006dae-0e38-4f4a-b689-40cfa0bacf72\") " pod="openstack/nova-cell0-2641-account-create-update-42tfb"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.042454 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lzg8p"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.055042 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b12b-account-create-update-26q6g"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.088408 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbfzm\" (UniqueName: \"kubernetes.io/projected/6689f553-a564-4dfa-982a-bedd8787e343-kube-api-access-zbfzm\") pod \"nova-cell1-ccbf-account-create-update-z9clm\" (UID: \"6689f553-a564-4dfa-982a-bedd8787e343\") " pod="openstack/nova-cell1-ccbf-account-create-update-z9clm"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.088788 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6689f553-a564-4dfa-982a-bedd8787e343-operator-scripts\") pod \"nova-cell1-ccbf-account-create-update-z9clm\" (UID: \"6689f553-a564-4dfa-982a-bedd8787e343\") " pod="openstack/nova-cell1-ccbf-account-create-update-z9clm"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.104860 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2641-account-create-update-42tfb"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.104952 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2n7dx"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.140234 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c49f994e-0d9a-4312-995d-d84d93f31f01","Type":"ContainerStarted","Data":"75b061719509895544c5101526474e3593e712a83ea8bf19f5d39b1e05838e7d"}
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.159089 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" event={"ID":"00825975-35eb-46d6-8aeb-753170564467","Type":"ContainerStarted","Data":"abf93b4af2ac9dd4692589722eae2b650829168ac6beae85279f3137d14413fd"}
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.159132 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" event={"ID":"00825975-35eb-46d6-8aeb-753170564467","Type":"ContainerStarted","Data":"c51378899bf00d1d35448264548d97a29edb696e23a0c7d82dc9c0e5f31cabc0"}
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.163848 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.163836861 podStartE2EDuration="3.163836861s" podCreationTimestamp="2025-12-02 14:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:41.157296707 +0000 UTC m=+1326.573110568" watchObservedRunningTime="2025-12-02 14:04:41.163836861 +0000 UTC m=+1326.579650702"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.190972 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6689f553-a564-4dfa-982a-bedd8787e343-operator-scripts\") pod \"nova-cell1-ccbf-account-create-update-z9clm\" (UID: \"6689f553-a564-4dfa-982a-bedd8787e343\") " pod="openstack/nova-cell1-ccbf-account-create-update-z9clm"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.191076 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbfzm\" (UniqueName: \"kubernetes.io/projected/6689f553-a564-4dfa-982a-bedd8787e343-kube-api-access-zbfzm\") pod \"nova-cell1-ccbf-account-create-update-z9clm\" (UID: \"6689f553-a564-4dfa-982a-bedd8787e343\") " pod="openstack/nova-cell1-ccbf-account-create-update-z9clm"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.192730 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6689f553-a564-4dfa-982a-bedd8787e343-operator-scripts\") pod \"nova-cell1-ccbf-account-create-update-z9clm\" (UID: \"6689f553-a564-4dfa-982a-bedd8787e343\") " pod="openstack/nova-cell1-ccbf-account-create-update-z9clm"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.211963 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbfzm\" (UniqueName: \"kubernetes.io/projected/6689f553-a564-4dfa-982a-bedd8787e343-kube-api-access-zbfzm\") pod \"nova-cell1-ccbf-account-create-update-z9clm\" (UID: \"6689f553-a564-4dfa-982a-bedd8787e343\") " pod="openstack/nova-cell1-ccbf-account-create-update-z9clm"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.299124 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ccbf-account-create-update-z9clm"
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.350272 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s9gw6"]
Dec 02 14:04:41 crc kubenswrapper[4900]: I1202 14:04:41.765937 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lzg8p"]
Dec 02 14:04:41 crc kubenswrapper[4900]: W1202 14:04:41.799844 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda048a5a_7760_4892_8b7d_8496bcba1142.slice/crio-8861f6e448d58bfb034652baa4c92cf36051c81f75c7f125d3f36273497b90fc WatchSource:0}: Error finding container 8861f6e448d58bfb034652baa4c92cf36051c81f75c7f125d3f36273497b90fc: Status 404 returned error can't find the container with id 8861f6e448d58bfb034652baa4c92cf36051c81f75c7f125d3f36273497b90fc
Dec 02 14:04:42 crc kubenswrapper[4900]: W1202 14:04:42.037902 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac832e52_24a1_4e72_84ad_47259077412f.slice/crio-a0278e7ada484402a4f403d2b1d79dcb220d0ab1d88c62de06bd63ea5e763554 WatchSource:0}: Error finding container a0278e7ada484402a4f403d2b1d79dcb220d0ab1d88c62de06bd63ea5e763554: Status 404 returned error can't find the container with id a0278e7ada484402a4f403d2b1d79dcb220d0ab1d88c62de06bd63ea5e763554
Dec 02 14:04:42 crc kubenswrapper[4900]: W1202 14:04:42.041698 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3db11313_933b_4905_acd9_47c95d3014eb.slice/crio-6f31f0d0ce838b207b3c405c8e607fc617fa9db59290be4bc92b42db030d39b7 WatchSource:0}: Error finding container 6f31f0d0ce838b207b3c405c8e607fc617fa9db59290be4bc92b42db030d39b7: Status 404 returned error can't find the container with id 6f31f0d0ce838b207b3c405c8e607fc617fa9db59290be4bc92b42db030d39b7
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.042869 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b12b-account-create-update-26q6g"]
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.051884 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2n7dx"]
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.129770 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ccbf-account-create-update-z9clm"]
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.146708 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2641-account-create-update-42tfb"]
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.189753 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2n7dx" event={"ID":"3db11313-933b-4905-acd9-47c95d3014eb","Type":"ContainerStarted","Data":"6f31f0d0ce838b207b3c405c8e607fc617fa9db59290be4bc92b42db030d39b7"}
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.199858 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b12b-account-create-update-26q6g" event={"ID":"ac832e52-24a1-4e72-84ad-47259077412f","Type":"ContainerStarted","Data":"a0278e7ada484402a4f403d2b1d79dcb220d0ab1d88c62de06bd63ea5e763554"}
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.204744 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lzg8p" event={"ID":"da048a5a-7760-4892-8b7d-8496bcba1142","Type":"ContainerStarted","Data":"add50ad9f64a2bdf1221d6b96af94f563bea1cd4fc5cd462eed8e1f992bf323e"}
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.204786 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lzg8p" event={"ID":"da048a5a-7760-4892-8b7d-8496bcba1142","Type":"ContainerStarted","Data":"8861f6e448d58bfb034652baa4c92cf36051c81f75c7f125d3f36273497b90fc"}
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.221237 4900 generic.go:334] "Generic (PLEG): container finished" podID="26491a11-ebc3-4790-af52-2459d18e0a5b" containerID="cecd536d3b91c6e7b5527b0076daff8705116d48966a3a954f574f3ce05fbc95" exitCode=0
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.221317 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s9gw6" event={"ID":"26491a11-ebc3-4790-af52-2459d18e0a5b","Type":"ContainerDied","Data":"cecd536d3b91c6e7b5527b0076daff8705116d48966a3a954f574f3ce05fbc95"}
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.221344 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s9gw6" event={"ID":"26491a11-ebc3-4790-af52-2459d18e0a5b","Type":"ContainerStarted","Data":"61bf50ee76b14c22b90c1a6ef57d92ccece6faf83e136e1a2d0a3c75315ee0cf"}
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.223183 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-lzg8p" podStartSLOduration=2.223172058 podStartE2EDuration="2.223172058s" podCreationTimestamp="2025-12-02 14:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:42.219756642 +0000 UTC m=+1327.635570493" watchObservedRunningTime="2025-12-02 14:04:42.223172058 +0000 UTC m=+1327.638985909"
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.229763 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2641-account-create-update-42tfb" event={"ID":"bc006dae-0e38-4f4a-b689-40cfa0bacf72","Type":"ContainerStarted","Data":"1887383afc53326816024c16f80239726727573bb586d2643ab17472d3a23b62"}
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.252783 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ccbf-account-create-update-z9clm" event={"ID":"6689f553-a564-4dfa-982a-bedd8787e343","Type":"ContainerStarted","Data":"cf9c095c601470e14c25ca84b86c9bf419bf4b44c88c51bbaf84589de74cc672"}
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.267993 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" event={"ID":"00825975-35eb-46d6-8aeb-753170564467","Type":"ContainerStarted","Data":"17feb893704561d9ac1181affdd466264eef60e1c8a36bef9d6bfe975eaed6b6"}
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.268039 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.268059 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5"
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.289239 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" podStartSLOduration=3.289221619 podStartE2EDuration="3.289221619s" podCreationTimestamp="2025-12-02 14:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:42.287256944 +0000 UTC m=+1327.703070795" watchObservedRunningTime="2025-12-02 14:04:42.289221619 +0000 UTC m=+1327.705035470"
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.925172 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.925696 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="ceilometer-central-agent" containerID="cri-o://3504e1e69825f85aa3759da81f54ec9e34ea36cd6893dc197013299a5cd1ff75" gracePeriod=30
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.925797 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="ceilometer-notification-agent" containerID="cri-o://863afa87ccdad02084c44780deb2d0531ba616fca5c768a7bf95fcaecd10bc57" gracePeriod=30
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.925823 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="sg-core" containerID="cri-o://d7a6e6e60fd08480f13d8ae6bd4325ac1afbde227364b9dcab543204bdda319d" gracePeriod=30
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.925870 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="proxy-httpd" containerID="cri-o://14ebf9194e3548cc1e22d6dc2af7a081c7df02d615a2ca3b6660ba75e2e489c2" gracePeriod=30
Dec 02 14:04:42 crc kubenswrapper[4900]: I1202 14:04:42.970873 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f6dffdfb8-h46pm"
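The mount records above walk each volume through the kubelet's fixed three-phase sequence: "operationExecutor.VerifyControllerAttachedVolume started", then "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded". A minimal Python sketch for checking an excerpt like this one, assuming the journald/klog line layout shown above (the UNIQUE regex and the mount_progress helper are illustrative names of mine, not kubelet API):

    import re
    import sys
    from collections import defaultdict

    # Phase strings are taken verbatim from the kubelet log lines above;
    # the regex is an assumption about how UniqueName is quoted in this dump
    # (it tolerates both escaped \" and plain " quoting).
    PHASES = [
        "operationExecutor.VerifyControllerAttachedVolume started",
        "operationExecutor.MountVolume started",
        "MountVolume.SetUp succeeded",
    ]
    UNIQUE = re.compile(r'UniqueName: \\?"([^"\\]+)\\?"')

    def mount_progress(lines):
        """Map volume UniqueName -> list of mount phases seen, in log order."""
        seen = defaultdict(list)
        for line in lines:
            for phase in PHASES:
                if phase in line:
                    m = UNIQUE.search(line)
                    if m:
                        seen[m.group(1)].append(phase)
                    break
        return seen

    if __name__ == "__main__":
        for volume, phases in mount_progress(sys.stdin).items():
            status = "ok" if PHASES[2] in phases else "INCOMPLETE"
            print(f"{status:10s} {volume} ({len(phases)} events)")

Volumes that log the first phase but never reach "MountVolume.SetUp succeeded" are the ones worth chasing; in the excerpt above every volume for cinder-scheduler-0, swift-proxy-5b7cc7fc75-qdmm5 and the nova db-create/account-create jobs completes.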
status="ready" pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.024862 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.280806 4900 generic.go:334] "Generic (PLEG): container finished" podID="da048a5a-7760-4892-8b7d-8496bcba1142" containerID="add50ad9f64a2bdf1221d6b96af94f563bea1cd4fc5cd462eed8e1f992bf323e" exitCode=0 Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.280846 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lzg8p" event={"ID":"da048a5a-7760-4892-8b7d-8496bcba1142","Type":"ContainerDied","Data":"add50ad9f64a2bdf1221d6b96af94f563bea1cd4fc5cd462eed8e1f992bf323e"} Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.286193 4900 generic.go:334] "Generic (PLEG): container finished" podID="bc006dae-0e38-4f4a-b689-40cfa0bacf72" containerID="2f05d6a11986157bd558c9b03bfdb6ec1225bad183c6930efe3b31debce11410" exitCode=0 Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.286245 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2641-account-create-update-42tfb" event={"ID":"bc006dae-0e38-4f4a-b689-40cfa0bacf72","Type":"ContainerDied","Data":"2f05d6a11986157bd558c9b03bfdb6ec1225bad183c6930efe3b31debce11410"} Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.290498 4900 generic.go:334] "Generic (PLEG): container finished" podID="6689f553-a564-4dfa-982a-bedd8787e343" containerID="8200837b67485781a599b717c166bbbea97823d4fef16a94bd63ede91ddd6205" exitCode=0 Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.290585 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ccbf-account-create-update-z9clm" event={"ID":"6689f553-a564-4dfa-982a-bedd8787e343","Type":"ContainerDied","Data":"8200837b67485781a599b717c166bbbea97823d4fef16a94bd63ede91ddd6205"} Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.298887 4900 generic.go:334] "Generic (PLEG): container finished" podID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerID="14ebf9194e3548cc1e22d6dc2af7a081c7df02d615a2ca3b6660ba75e2e489c2" exitCode=0 Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.298917 4900 generic.go:334] "Generic (PLEG): container finished" podID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerID="d7a6e6e60fd08480f13d8ae6bd4325ac1afbde227364b9dcab543204bdda319d" exitCode=2 Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.298959 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73","Type":"ContainerDied","Data":"14ebf9194e3548cc1e22d6dc2af7a081c7df02d615a2ca3b6660ba75e2e489c2"} Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.298985 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73","Type":"ContainerDied","Data":"d7a6e6e60fd08480f13d8ae6bd4325ac1afbde227364b9dcab543204bdda319d"} Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.307099 4900 generic.go:334] "Generic (PLEG): container finished" podID="3db11313-933b-4905-acd9-47c95d3014eb" containerID="0ebf754234916bfca27372570d7a573ae167129a9fe1251644b771d38e0378ca" exitCode=0 Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.307233 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2n7dx" 
event={"ID":"3db11313-933b-4905-acd9-47c95d3014eb","Type":"ContainerDied","Data":"0ebf754234916bfca27372570d7a573ae167129a9fe1251644b771d38e0378ca"} Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.322951 4900 generic.go:334] "Generic (PLEG): container finished" podID="ac832e52-24a1-4e72-84ad-47259077412f" containerID="5e8d8d1fb36f3a6f727fa7460d868495ea6d53445bd33ed40121a782230a4712" exitCode=0 Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.323794 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b12b-account-create-update-26q6g" event={"ID":"ac832e52-24a1-4e72-84ad-47259077412f","Type":"ContainerDied","Data":"5e8d8d1fb36f3a6f727fa7460d868495ea6d53445bd33ed40121a782230a4712"} Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.690593 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s9gw6" Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.696417 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.782444 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26491a11-ebc3-4790-af52-2459d18e0a5b-operator-scripts\") pod \"26491a11-ebc3-4790-af52-2459d18e0a5b\" (UID: \"26491a11-ebc3-4790-af52-2459d18e0a5b\") " Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.782669 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwbcq\" (UniqueName: \"kubernetes.io/projected/26491a11-ebc3-4790-af52-2459d18e0a5b-kube-api-access-dwbcq\") pod \"26491a11-ebc3-4790-af52-2459d18e0a5b\" (UID: \"26491a11-ebc3-4790-af52-2459d18e0a5b\") " Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.784407 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26491a11-ebc3-4790-af52-2459d18e0a5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26491a11-ebc3-4790-af52-2459d18e0a5b" (UID: "26491a11-ebc3-4790-af52-2459d18e0a5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.797813 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26491a11-ebc3-4790-af52-2459d18e0a5b-kube-api-access-dwbcq" (OuterVolumeSpecName: "kube-api-access-dwbcq") pod "26491a11-ebc3-4790-af52-2459d18e0a5b" (UID: "26491a11-ebc3-4790-af52-2459d18e0a5b"). InnerVolumeSpecName "kube-api-access-dwbcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.885782 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwbcq\" (UniqueName: \"kubernetes.io/projected/26491a11-ebc3-4790-af52-2459d18e0a5b-kube-api-access-dwbcq\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:43 crc kubenswrapper[4900]: I1202 14:04:43.885813 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26491a11-ebc3-4790-af52-2459d18e0a5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:44 crc kubenswrapper[4900]: I1202 14:04:44.340568 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-s9gw6" Dec 02 14:04:44 crc kubenswrapper[4900]: I1202 14:04:44.340567 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s9gw6" event={"ID":"26491a11-ebc3-4790-af52-2459d18e0a5b","Type":"ContainerDied","Data":"61bf50ee76b14c22b90c1a6ef57d92ccece6faf83e136e1a2d0a3c75315ee0cf"} Dec 02 14:04:44 crc kubenswrapper[4900]: I1202 14:04:44.340714 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61bf50ee76b14c22b90c1a6ef57d92ccece6faf83e136e1a2d0a3c75315ee0cf" Dec 02 14:04:44 crc kubenswrapper[4900]: I1202 14:04:44.351335 4900 generic.go:334] "Generic (PLEG): container finished" podID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerID="3504e1e69825f85aa3759da81f54ec9e34ea36cd6893dc197013299a5cd1ff75" exitCode=0 Dec 02 14:04:44 crc kubenswrapper[4900]: I1202 14:04:44.351579 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73","Type":"ContainerDied","Data":"3504e1e69825f85aa3759da81f54ec9e34ea36cd6893dc197013299a5cd1ff75"} Dec 02 14:04:45 crc kubenswrapper[4900]: I1202 14:04:45.117099 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:04:45 crc kubenswrapper[4900]: I1202 14:04:45.117385 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:04:46 crc kubenswrapper[4900]: I1202 14:04:46.985669 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:47 crc kubenswrapper[4900]: I1202 14:04:47.389602 4900 generic.go:334] "Generic (PLEG): container finished" podID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerID="863afa87ccdad02084c44780deb2d0531ba616fca5c768a7bf95fcaecd10bc57" exitCode=0 Dec 02 14:04:47 crc kubenswrapper[4900]: I1202 14:04:47.389657 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73","Type":"ContainerDied","Data":"863afa87ccdad02084c44780deb2d0531ba616fca5c768a7bf95fcaecd10bc57"} Dec 02 14:04:48 crc kubenswrapper[4900]: I1202 14:04:48.877801 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 02 14:04:49 crc kubenswrapper[4900]: I1202 14:04:49.293196 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:04:49 crc kubenswrapper[4900]: I1202 14:04:49.353258 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6546f8c848-792t5"] Dec 02 14:04:49 crc kubenswrapper[4900]: I1202 14:04:49.360660 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6546f8c848-792t5" podUID="41d88fa5-0aa7-4ab7-8089-e073efd31ff0" containerName="neutron-api" containerID="cri-o://99e006ed89c5e5fa44bcadfaecbda91fa480e0524e0de1fae422e487c980d8d1" gracePeriod=30 Dec 02 14:04:49 crc kubenswrapper[4900]: I1202 14:04:49.361285 4900 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6546f8c848-792t5" podUID="41d88fa5-0aa7-4ab7-8089-e073efd31ff0" containerName="neutron-httpd" containerID="cri-o://06d9d69c290fd56f12d48c09442b22ccd9d32e63a9fc8add3b2502eb112d4a4d" gracePeriod=30 Dec 02 14:04:49 crc kubenswrapper[4900]: I1202 14:04:49.835941 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" Dec 02 14:04:49 crc kubenswrapper[4900]: I1202 14:04:49.840390 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" Dec 02 14:04:50 crc kubenswrapper[4900]: I1202 14:04:50.432698 4900 generic.go:334] "Generic (PLEG): container finished" podID="41d88fa5-0aa7-4ab7-8089-e073efd31ff0" containerID="06d9d69c290fd56f12d48c09442b22ccd9d32e63a9fc8add3b2502eb112d4a4d" exitCode=0 Dec 02 14:04:50 crc kubenswrapper[4900]: I1202 14:04:50.433547 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6546f8c848-792t5" event={"ID":"41d88fa5-0aa7-4ab7-8089-e073efd31ff0","Type":"ContainerDied","Data":"06d9d69c290fd56f12d48c09442b22ccd9d32e63a9fc8add3b2502eb112d4a4d"} Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.862715 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ccbf-account-create-update-z9clm" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.863490 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2641-account-create-update-42tfb" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.905104 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lzg8p" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.922880 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2n7dx" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.939226 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b12b-account-create-update-26q6g" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.945039 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbfzm\" (UniqueName: \"kubernetes.io/projected/6689f553-a564-4dfa-982a-bedd8787e343-kube-api-access-zbfzm\") pod \"6689f553-a564-4dfa-982a-bedd8787e343\" (UID: \"6689f553-a564-4dfa-982a-bedd8787e343\") " Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.945083 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc006dae-0e38-4f4a-b689-40cfa0bacf72-operator-scripts\") pod \"bc006dae-0e38-4f4a-b689-40cfa0bacf72\" (UID: \"bc006dae-0e38-4f4a-b689-40cfa0bacf72\") " Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.945155 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxtqd\" (UniqueName: \"kubernetes.io/projected/da048a5a-7760-4892-8b7d-8496bcba1142-kube-api-access-nxtqd\") pod \"da048a5a-7760-4892-8b7d-8496bcba1142\" (UID: \"da048a5a-7760-4892-8b7d-8496bcba1142\") " Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.945179 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da048a5a-7760-4892-8b7d-8496bcba1142-operator-scripts\") pod \"da048a5a-7760-4892-8b7d-8496bcba1142\" (UID: \"da048a5a-7760-4892-8b7d-8496bcba1142\") " Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.945205 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6689f553-a564-4dfa-982a-bedd8787e343-operator-scripts\") pod \"6689f553-a564-4dfa-982a-bedd8787e343\" (UID: \"6689f553-a564-4dfa-982a-bedd8787e343\") " Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.945239 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3db11313-933b-4905-acd9-47c95d3014eb-operator-scripts\") pod \"3db11313-933b-4905-acd9-47c95d3014eb\" (UID: \"3db11313-933b-4905-acd9-47c95d3014eb\") " Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.945257 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmlnk\" (UniqueName: \"kubernetes.io/projected/3db11313-933b-4905-acd9-47c95d3014eb-kube-api-access-mmlnk\") pod \"3db11313-933b-4905-acd9-47c95d3014eb\" (UID: \"3db11313-933b-4905-acd9-47c95d3014eb\") " Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.945281 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwr9n\" (UniqueName: \"kubernetes.io/projected/bc006dae-0e38-4f4a-b689-40cfa0bacf72-kube-api-access-nwr9n\") pod \"bc006dae-0e38-4f4a-b689-40cfa0bacf72\" (UID: \"bc006dae-0e38-4f4a-b689-40cfa0bacf72\") " Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.946257 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db11313-933b-4905-acd9-47c95d3014eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3db11313-933b-4905-acd9-47c95d3014eb" (UID: "3db11313-933b-4905-acd9-47c95d3014eb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.947379 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3db11313-933b-4905-acd9-47c95d3014eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.950628 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da048a5a-7760-4892-8b7d-8496bcba1142-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da048a5a-7760-4892-8b7d-8496bcba1142" (UID: "da048a5a-7760-4892-8b7d-8496bcba1142"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.951045 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6689f553-a564-4dfa-982a-bedd8787e343-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6689f553-a564-4dfa-982a-bedd8787e343" (UID: "6689f553-a564-4dfa-982a-bedd8787e343"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.954004 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc006dae-0e38-4f4a-b689-40cfa0bacf72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc006dae-0e38-4f4a-b689-40cfa0bacf72" (UID: "bc006dae-0e38-4f4a-b689-40cfa0bacf72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.968317 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc006dae-0e38-4f4a-b689-40cfa0bacf72-kube-api-access-nwr9n" (OuterVolumeSpecName: "kube-api-access-nwr9n") pod "bc006dae-0e38-4f4a-b689-40cfa0bacf72" (UID: "bc006dae-0e38-4f4a-b689-40cfa0bacf72"). InnerVolumeSpecName "kube-api-access-nwr9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.968931 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db11313-933b-4905-acd9-47c95d3014eb-kube-api-access-mmlnk" (OuterVolumeSpecName: "kube-api-access-mmlnk") pod "3db11313-933b-4905-acd9-47c95d3014eb" (UID: "3db11313-933b-4905-acd9-47c95d3014eb"). InnerVolumeSpecName "kube-api-access-mmlnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.970527 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da048a5a-7760-4892-8b7d-8496bcba1142-kube-api-access-nxtqd" (OuterVolumeSpecName: "kube-api-access-nxtqd") pod "da048a5a-7760-4892-8b7d-8496bcba1142" (UID: "da048a5a-7760-4892-8b7d-8496bcba1142"). InnerVolumeSpecName "kube-api-access-nxtqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:51 crc kubenswrapper[4900]: I1202 14:04:51.973531 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6689f553-a564-4dfa-982a-bedd8787e343-kube-api-access-zbfzm" (OuterVolumeSpecName: "kube-api-access-zbfzm") pod "6689f553-a564-4dfa-982a-bedd8787e343" (UID: "6689f553-a564-4dfa-982a-bedd8787e343"). InnerVolumeSpecName "kube-api-access-zbfzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.048569 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac832e52-24a1-4e72-84ad-47259077412f-operator-scripts\") pod \"ac832e52-24a1-4e72-84ad-47259077412f\" (UID: \"ac832e52-24a1-4e72-84ad-47259077412f\") " Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.048806 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4czf6\" (UniqueName: \"kubernetes.io/projected/ac832e52-24a1-4e72-84ad-47259077412f-kube-api-access-4czf6\") pod \"ac832e52-24a1-4e72-84ad-47259077412f\" (UID: \"ac832e52-24a1-4e72-84ad-47259077412f\") " Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.049162 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac832e52-24a1-4e72-84ad-47259077412f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac832e52-24a1-4e72-84ad-47259077412f" (UID: "ac832e52-24a1-4e72-84ad-47259077412f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.049508 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbfzm\" (UniqueName: \"kubernetes.io/projected/6689f553-a564-4dfa-982a-bedd8787e343-kube-api-access-zbfzm\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.049528 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc006dae-0e38-4f4a-b689-40cfa0bacf72-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.049540 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxtqd\" (UniqueName: \"kubernetes.io/projected/da048a5a-7760-4892-8b7d-8496bcba1142-kube-api-access-nxtqd\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.049549 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac832e52-24a1-4e72-84ad-47259077412f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.049576 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da048a5a-7760-4892-8b7d-8496bcba1142-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.049586 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6689f553-a564-4dfa-982a-bedd8787e343-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.049594 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmlnk\" (UniqueName: \"kubernetes.io/projected/3db11313-933b-4905-acd9-47c95d3014eb-kube-api-access-mmlnk\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.049604 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwr9n\" (UniqueName: \"kubernetes.io/projected/bc006dae-0e38-4f4a-b689-40cfa0bacf72-kube-api-access-nwr9n\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.054560 4900 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac832e52-24a1-4e72-84ad-47259077412f-kube-api-access-4czf6" (OuterVolumeSpecName: "kube-api-access-4czf6") pod "ac832e52-24a1-4e72-84ad-47259077412f" (UID: "ac832e52-24a1-4e72-84ad-47259077412f"). InnerVolumeSpecName "kube-api-access-4czf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.059002 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.150887 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bns74\" (UniqueName: \"kubernetes.io/projected/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-kube-api-access-bns74\") pod \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.150972 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-run-httpd\") pod \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.151034 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-scripts\") pod \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.151096 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-config-data\") pod \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.151202 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-log-httpd\") pod \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.151221 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-sg-core-conf-yaml\") pod \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.151301 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-combined-ca-bundle\") pod \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\" (UID: \"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73\") " Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.151735 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4czf6\" (UniqueName: \"kubernetes.io/projected/ac832e52-24a1-4e72-84ad-47259077412f-kube-api-access-4czf6\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.152435 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" (UID: "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.152524 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" (UID: "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.155294 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-scripts" (OuterVolumeSpecName: "scripts") pod "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" (UID: "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.158860 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-kube-api-access-bns74" (OuterVolumeSpecName: "kube-api-access-bns74") pod "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" (UID: "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73"). InnerVolumeSpecName "kube-api-access-bns74". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.186951 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" (UID: "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.229890 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-config-data" (OuterVolumeSpecName: "config-data") pod "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" (UID: "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.237600 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" (UID: "c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.253312 4900 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.253351 4900 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.253365 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.253379 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bns74\" (UniqueName: \"kubernetes.io/projected/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-kube-api-access-bns74\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.253390 4900 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.253404 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.253414 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.453118 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2n7dx" event={"ID":"3db11313-933b-4905-acd9-47c95d3014eb","Type":"ContainerDied","Data":"6f31f0d0ce838b207b3c405c8e607fc617fa9db59290be4bc92b42db030d39b7"} Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.453171 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f31f0d0ce838b207b3c405c8e607fc617fa9db59290be4bc92b42db030d39b7" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.453237 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2n7dx" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.459519 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b12b-account-create-update-26q6g" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.459523 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b12b-account-create-update-26q6g" event={"ID":"ac832e52-24a1-4e72-84ad-47259077412f","Type":"ContainerDied","Data":"a0278e7ada484402a4f403d2b1d79dcb220d0ab1d88c62de06bd63ea5e763554"} Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.459681 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0278e7ada484402a4f403d2b1d79dcb220d0ab1d88c62de06bd63ea5e763554" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.461634 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lzg8p" event={"ID":"da048a5a-7760-4892-8b7d-8496bcba1142","Type":"ContainerDied","Data":"8861f6e448d58bfb034652baa4c92cf36051c81f75c7f125d3f36273497b90fc"} Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.461705 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8861f6e448d58bfb034652baa4c92cf36051c81f75c7f125d3f36273497b90fc" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.461725 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lzg8p" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.465261 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73","Type":"ContainerDied","Data":"36e7d9682997a0ea4fa05e7beb45a2c623fa16bb090cec86e3c3cc7f8c88aed7"} Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.465327 4900 scope.go:117] "RemoveContainer" containerID="14ebf9194e3548cc1e22d6dc2af7a081c7df02d615a2ca3b6660ba75e2e489c2" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.465362 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.472882 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2641-account-create-update-42tfb" event={"ID":"bc006dae-0e38-4f4a-b689-40cfa0bacf72","Type":"ContainerDied","Data":"1887383afc53326816024c16f80239726727573bb586d2643ab17472d3a23b62"} Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.472942 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1887383afc53326816024c16f80239726727573bb586d2643ab17472d3a23b62" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.473026 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2641-account-create-update-42tfb" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.478163 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ccbf-account-create-update-z9clm" event={"ID":"6689f553-a564-4dfa-982a-bedd8787e343","Type":"ContainerDied","Data":"cf9c095c601470e14c25ca84b86c9bf419bf4b44c88c51bbaf84589de74cc672"} Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.478209 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf9c095c601470e14c25ca84b86c9bf419bf4b44c88c51bbaf84589de74cc672" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.478265 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ccbf-account-create-update-z9clm" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.481178 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6ff6dcaf-b619-4169-9b36-81ee92264d71","Type":"ContainerStarted","Data":"6c10ca3ad8fceefa6060a64015e329b0a3d68e6291a95453dad98596043ab0ef"} Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.503796 4900 scope.go:117] "RemoveContainer" containerID="d7a6e6e60fd08480f13d8ae6bd4325ac1afbde227364b9dcab543204bdda319d" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.571748 4900 scope.go:117] "RemoveContainer" containerID="863afa87ccdad02084c44780deb2d0531ba616fca5c768a7bf95fcaecd10bc57" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.572019 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.585733 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.596429 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:04:52 crc kubenswrapper[4900]: E1202 14:04:52.596851 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc006dae-0e38-4f4a-b689-40cfa0bacf72" containerName="mariadb-account-create-update" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.596867 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc006dae-0e38-4f4a-b689-40cfa0bacf72" containerName="mariadb-account-create-update" Dec 02 14:04:52 crc kubenswrapper[4900]: E1202 14:04:52.596888 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6689f553-a564-4dfa-982a-bedd8787e343" containerName="mariadb-account-create-update" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.596896 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6689f553-a564-4dfa-982a-bedd8787e343" containerName="mariadb-account-create-update" Dec 02 14:04:52 crc kubenswrapper[4900]: E1202 14:04:52.596903 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="proxy-httpd" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.596911 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="proxy-httpd" Dec 02 14:04:52 crc kubenswrapper[4900]: E1202 14:04:52.596923 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="sg-core" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.596929 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="sg-core" Dec 02 14:04:52 crc kubenswrapper[4900]: E1202 14:04:52.596941 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26491a11-ebc3-4790-af52-2459d18e0a5b" containerName="mariadb-database-create" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.596947 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="26491a11-ebc3-4790-af52-2459d18e0a5b" containerName="mariadb-database-create" Dec 02 14:04:52 crc kubenswrapper[4900]: E1202 14:04:52.596961 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da048a5a-7760-4892-8b7d-8496bcba1142" containerName="mariadb-database-create" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.596968 4900 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="da048a5a-7760-4892-8b7d-8496bcba1142" containerName="mariadb-database-create" Dec 02 14:04:52 crc kubenswrapper[4900]: E1202 14:04:52.596982 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db11313-933b-4905-acd9-47c95d3014eb" containerName="mariadb-database-create" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.596988 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db11313-933b-4905-acd9-47c95d3014eb" containerName="mariadb-database-create" Dec 02 14:04:52 crc kubenswrapper[4900]: E1202 14:04:52.596998 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="ceilometer-notification-agent" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.597005 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="ceilometer-notification-agent" Dec 02 14:04:52 crc kubenswrapper[4900]: E1202 14:04:52.597020 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="ceilometer-central-agent" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.597025 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="ceilometer-central-agent" Dec 02 14:04:52 crc kubenswrapper[4900]: E1202 14:04:52.597033 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac832e52-24a1-4e72-84ad-47259077412f" containerName="mariadb-account-create-update" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.597038 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac832e52-24a1-4e72-84ad-47259077412f" containerName="mariadb-account-create-update" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.641006 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db11313-933b-4905-acd9-47c95d3014eb" containerName="mariadb-database-create" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.641083 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac832e52-24a1-4e72-84ad-47259077412f" containerName="mariadb-account-create-update" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.641111 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="sg-core" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.641125 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="proxy-httpd" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.641140 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="26491a11-ebc3-4790-af52-2459d18e0a5b" containerName="mariadb-database-create" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.641149 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="da048a5a-7760-4892-8b7d-8496bcba1142" containerName="mariadb-database-create" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.641173 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="ceilometer-notification-agent" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.641200 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" containerName="ceilometer-central-agent" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.641228 4900 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="6689f553-a564-4dfa-982a-bedd8787e343" containerName="mariadb-account-create-update" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.641246 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc006dae-0e38-4f4a-b689-40cfa0bacf72" containerName="mariadb-account-create-update" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.644360 4900 scope.go:117] "RemoveContainer" containerID="3504e1e69825f85aa3759da81f54ec9e34ea36cd6893dc197013299a5cd1ff75" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.645806 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.645927 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.650876 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.650987 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.671242 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-run-httpd\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.671312 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-log-httpd\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.671356 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.671436 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-scripts\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.671490 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgfz\" (UniqueName: \"kubernetes.io/projected/561fbf10-b974-4f35-a11e-794b31a062d8-kube-api-access-mvgfz\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.671541 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-config-data\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.671564 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.772309 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-log-httpd\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.772676 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.772737 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-scripts\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.772780 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvgfz\" (UniqueName: \"kubernetes.io/projected/561fbf10-b974-4f35-a11e-794b31a062d8-kube-api-access-mvgfz\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.772819 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-config-data\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.772842 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.772873 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-run-httpd\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.772900 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-log-httpd\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.773216 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-run-httpd\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.785986 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.786057 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-scripts\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.786082 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.789952 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvgfz\" (UniqueName: \"kubernetes.io/projected/561fbf10-b974-4f35-a11e-794b31a062d8-kube-api-access-mvgfz\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.793865 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-config-data\") pod \"ceilometer-0\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " pod="openstack/ceilometer-0" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.922557 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73" path="/var/lib/kubelet/pods/c77cc0c8-bd3f-46e6-84fc-0d8d329e3f73/volumes" Dec 02 14:04:52 crc kubenswrapper[4900]: I1202 14:04:52.970241 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:04:53 crc kubenswrapper[4900]: W1202 14:04:53.425415 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod561fbf10_b974_4f35_a11e_794b31a062d8.slice/crio-1030bc15396ba17725d2f77ea72417a503a6538f21afb065ee6d30833cf309ad WatchSource:0}: Error finding container 1030bc15396ba17725d2f77ea72417a503a6538f21afb065ee6d30833cf309ad: Status 404 returned error can't find the container with id 1030bc15396ba17725d2f77ea72417a503a6538f21afb065ee6d30833cf309ad Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.427196 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.496696 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"561fbf10-b974-4f35-a11e-794b31a062d8","Type":"ContainerStarted","Data":"1030bc15396ba17725d2f77ea72417a503a6538f21afb065ee6d30833cf309ad"} Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.501811 4900 generic.go:334] "Generic (PLEG): container finished" podID="41d88fa5-0aa7-4ab7-8089-e073efd31ff0" containerID="99e006ed89c5e5fa44bcadfaecbda91fa480e0524e0de1fae422e487c980d8d1" exitCode=0 Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.501926 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6546f8c848-792t5" event={"ID":"41d88fa5-0aa7-4ab7-8089-e073efd31ff0","Type":"ContainerDied","Data":"99e006ed89c5e5fa44bcadfaecbda91fa480e0524e0de1fae422e487c980d8d1"} Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.532553 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.773093786 podStartE2EDuration="20.532535245s" podCreationTimestamp="2025-12-02 14:04:33 +0000 UTC" firstStartedPulling="2025-12-02 14:04:34.948486469 +0000 UTC m=+1320.364300320" lastFinishedPulling="2025-12-02 14:04:51.707927888 +0000 UTC m=+1337.123741779" observedRunningTime="2025-12-02 14:04:53.521591046 +0000 UTC m=+1338.937404907" watchObservedRunningTime="2025-12-02 14:04:53.532535245 +0000 UTC m=+1338.948349106" Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.740897 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.790407 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gdtz\" (UniqueName: \"kubernetes.io/projected/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-kube-api-access-9gdtz\") pod \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.790480 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-combined-ca-bundle\") pod \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.790534 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-config\") pod \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.790609 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-httpd-config\") pod \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.796567 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "41d88fa5-0aa7-4ab7-8089-e073efd31ff0" (UID: "41d88fa5-0aa7-4ab7-8089-e073efd31ff0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.798855 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-kube-api-access-9gdtz" (OuterVolumeSpecName: "kube-api-access-9gdtz") pod "41d88fa5-0aa7-4ab7-8089-e073efd31ff0" (UID: "41d88fa5-0aa7-4ab7-8089-e073efd31ff0"). InnerVolumeSpecName "kube-api-access-9gdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.875979 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-config" (OuterVolumeSpecName: "config") pod "41d88fa5-0aa7-4ab7-8089-e073efd31ff0" (UID: "41d88fa5-0aa7-4ab7-8089-e073efd31ff0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.892825 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-ovndb-tls-certs\") pod \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\" (UID: \"41d88fa5-0aa7-4ab7-8089-e073efd31ff0\") " Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.893311 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gdtz\" (UniqueName: \"kubernetes.io/projected/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-kube-api-access-9gdtz\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.893332 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.893344 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.895834 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41d88fa5-0aa7-4ab7-8089-e073efd31ff0" (UID: "41d88fa5-0aa7-4ab7-8089-e073efd31ff0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:53 crc kubenswrapper[4900]: I1202 14:04:53.998267 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:54 crc kubenswrapper[4900]: I1202 14:04:54.027941 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "41d88fa5-0aa7-4ab7-8089-e073efd31ff0" (UID: "41d88fa5-0aa7-4ab7-8089-e073efd31ff0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:54 crc kubenswrapper[4900]: I1202 14:04:54.100156 4900 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41d88fa5-0aa7-4ab7-8089-e073efd31ff0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:54 crc kubenswrapper[4900]: I1202 14:04:54.513279 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"561fbf10-b974-4f35-a11e-794b31a062d8","Type":"ContainerStarted","Data":"c4031e0db2eba55c98feabb056dc51cbdb465f52e15710b55b3062b2f8a88e55"} Dec 02 14:04:54 crc kubenswrapper[4900]: I1202 14:04:54.515203 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6546f8c848-792t5" event={"ID":"41d88fa5-0aa7-4ab7-8089-e073efd31ff0","Type":"ContainerDied","Data":"4e1c615b9faf554247f1eab9b7f04d0d40974e476b61ed32c70cb0feec65263a"} Dec 02 14:04:54 crc kubenswrapper[4900]: I1202 14:04:54.515262 4900 scope.go:117] "RemoveContainer" containerID="06d9d69c290fd56f12d48c09442b22ccd9d32e63a9fc8add3b2502eb112d4a4d" Dec 02 14:04:54 crc kubenswrapper[4900]: I1202 14:04:54.515275 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6546f8c848-792t5" Dec 02 14:04:54 crc kubenswrapper[4900]: I1202 14:04:54.546834 4900 scope.go:117] "RemoveContainer" containerID="99e006ed89c5e5fa44bcadfaecbda91fa480e0524e0de1fae422e487c980d8d1" Dec 02 14:04:54 crc kubenswrapper[4900]: I1202 14:04:54.561966 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6546f8c848-792t5"] Dec 02 14:04:54 crc kubenswrapper[4900]: I1202 14:04:54.572176 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6546f8c848-792t5"] Dec 02 14:04:54 crc kubenswrapper[4900]: I1202 14:04:54.932970 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d88fa5-0aa7-4ab7-8089-e073efd31ff0" path="/var/lib/kubelet/pods/41d88fa5-0aa7-4ab7-8089-e073efd31ff0/volumes" Dec 02 14:04:55 crc kubenswrapper[4900]: I1202 14:04:55.526632 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"561fbf10-b974-4f35-a11e-794b31a062d8","Type":"ContainerStarted","Data":"ac66a313ddefbc43d9a48877544852152c3e72fd3a2009b4e0152389e6b320f0"} Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.108626 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bj75w"] Dec 02 14:04:56 crc kubenswrapper[4900]: E1202 14:04:56.109454 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d88fa5-0aa7-4ab7-8089-e073efd31ff0" containerName="neutron-httpd" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.109481 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d88fa5-0aa7-4ab7-8089-e073efd31ff0" containerName="neutron-httpd" Dec 02 14:04:56 crc kubenswrapper[4900]: E1202 14:04:56.109505 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d88fa5-0aa7-4ab7-8089-e073efd31ff0" containerName="neutron-api" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.109517 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d88fa5-0aa7-4ab7-8089-e073efd31ff0" containerName="neutron-api" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.110156 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d88fa5-0aa7-4ab7-8089-e073efd31ff0" containerName="neutron-api" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.110178 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d88fa5-0aa7-4ab7-8089-e073efd31ff0" containerName="neutron-httpd" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.110946 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.113404 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-h7qq8" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.114445 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.115947 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.117504 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bj75w"] Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.154939 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.155772 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf747\" (UniqueName: \"kubernetes.io/projected/d28b512a-2406-4ad9-a594-7d408b8d3fb6-kube-api-access-qf747\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.156363 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-scripts\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.156504 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-config-data\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.258912 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.259454 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf747\" (UniqueName: \"kubernetes.io/projected/d28b512a-2406-4ad9-a594-7d408b8d3fb6-kube-api-access-qf747\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.259533 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-scripts\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: 
\"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.259602 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-config-data\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.264287 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-scripts\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.264418 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-config-data\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.270518 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.282969 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf747\" (UniqueName: \"kubernetes.io/projected/d28b512a-2406-4ad9-a594-7d408b8d3fb6-kube-api-access-qf747\") pod \"nova-cell0-conductor-db-sync-bj75w\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.439325 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.574265 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"561fbf10-b974-4f35-a11e-794b31a062d8","Type":"ContainerStarted","Data":"16b647b077671b39bbb77f2bd625ab4b2797304525bcd5baa95182b3c1b7fd24"} Dec 02 14:04:56 crc kubenswrapper[4900]: I1202 14:04:56.888977 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bj75w"] Dec 02 14:04:56 crc kubenswrapper[4900]: W1202 14:04:56.893016 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd28b512a_2406_4ad9_a594_7d408b8d3fb6.slice/crio-fcab8fba2dd84ee063be7ac90c56f73f8909e3b894d9c602b8c48f4e913f5b8f WatchSource:0}: Error finding container fcab8fba2dd84ee063be7ac90c56f73f8909e3b894d9c602b8c48f4e913f5b8f: Status 404 returned error can't find the container with id fcab8fba2dd84ee063be7ac90c56f73f8909e3b894d9c602b8c48f4e913f5b8f Dec 02 14:04:57 crc kubenswrapper[4900]: I1202 14:04:57.585178 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bj75w" event={"ID":"d28b512a-2406-4ad9-a594-7d408b8d3fb6","Type":"ContainerStarted","Data":"fcab8fba2dd84ee063be7ac90c56f73f8909e3b894d9c602b8c48f4e913f5b8f"} Dec 02 14:04:58 crc kubenswrapper[4900]: I1202 14:04:58.596615 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"561fbf10-b974-4f35-a11e-794b31a062d8","Type":"ContainerStarted","Data":"a927ea97cc7e2a2a52d80777a3d1ab730748d1d97293530d9e901cd232b59400"} Dec 02 14:04:58 crc kubenswrapper[4900]: I1202 14:04:58.597553 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:04:58 crc kubenswrapper[4900]: I1202 14:04:58.631868 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.358986538 podStartE2EDuration="6.63184318s" podCreationTimestamp="2025-12-02 14:04:52 +0000 UTC" firstStartedPulling="2025-12-02 14:04:53.429273074 +0000 UTC m=+1338.845086945" lastFinishedPulling="2025-12-02 14:04:57.702129736 +0000 UTC m=+1343.117943587" observedRunningTime="2025-12-02 14:04:58.619350808 +0000 UTC m=+1344.035164679" watchObservedRunningTime="2025-12-02 14:04:58.63184318 +0000 UTC m=+1344.047657051" Dec 02 14:05:02 crc kubenswrapper[4900]: I1202 14:05:02.261151 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:02 crc kubenswrapper[4900]: I1202 14:05:02.262209 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="ceilometer-central-agent" containerID="cri-o://c4031e0db2eba55c98feabb056dc51cbdb465f52e15710b55b3062b2f8a88e55" gracePeriod=30 Dec 02 14:05:02 crc kubenswrapper[4900]: I1202 14:05:02.262408 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="proxy-httpd" containerID="cri-o://a927ea97cc7e2a2a52d80777a3d1ab730748d1d97293530d9e901cd232b59400" gracePeriod=30 Dec 02 14:05:02 crc kubenswrapper[4900]: I1202 14:05:02.262749 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" 
containerName="ceilometer-notification-agent" containerID="cri-o://ac66a313ddefbc43d9a48877544852152c3e72fd3a2009b4e0152389e6b320f0" gracePeriod=30 Dec 02 14:05:02 crc kubenswrapper[4900]: I1202 14:05:02.263213 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="sg-core" containerID="cri-o://16b647b077671b39bbb77f2bd625ab4b2797304525bcd5baa95182b3c1b7fd24" gracePeriod=30 Dec 02 14:05:02 crc kubenswrapper[4900]: I1202 14:05:02.640512 4900 generic.go:334] "Generic (PLEG): container finished" podID="561fbf10-b974-4f35-a11e-794b31a062d8" containerID="a927ea97cc7e2a2a52d80777a3d1ab730748d1d97293530d9e901cd232b59400" exitCode=0 Dec 02 14:05:02 crc kubenswrapper[4900]: I1202 14:05:02.640551 4900 generic.go:334] "Generic (PLEG): container finished" podID="561fbf10-b974-4f35-a11e-794b31a062d8" containerID="16b647b077671b39bbb77f2bd625ab4b2797304525bcd5baa95182b3c1b7fd24" exitCode=2 Dec 02 14:05:02 crc kubenswrapper[4900]: I1202 14:05:02.640583 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"561fbf10-b974-4f35-a11e-794b31a062d8","Type":"ContainerDied","Data":"a927ea97cc7e2a2a52d80777a3d1ab730748d1d97293530d9e901cd232b59400"} Dec 02 14:05:02 crc kubenswrapper[4900]: I1202 14:05:02.640614 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"561fbf10-b974-4f35-a11e-794b31a062d8","Type":"ContainerDied","Data":"16b647b077671b39bbb77f2bd625ab4b2797304525bcd5baa95182b3c1b7fd24"} Dec 02 14:05:03 crc kubenswrapper[4900]: I1202 14:05:03.660882 4900 generic.go:334] "Generic (PLEG): container finished" podID="561fbf10-b974-4f35-a11e-794b31a062d8" containerID="ac66a313ddefbc43d9a48877544852152c3e72fd3a2009b4e0152389e6b320f0" exitCode=0 Dec 02 14:05:03 crc kubenswrapper[4900]: I1202 14:05:03.661156 4900 generic.go:334] "Generic (PLEG): container finished" podID="561fbf10-b974-4f35-a11e-794b31a062d8" containerID="c4031e0db2eba55c98feabb056dc51cbdb465f52e15710b55b3062b2f8a88e55" exitCode=0 Dec 02 14:05:03 crc kubenswrapper[4900]: I1202 14:05:03.660948 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"561fbf10-b974-4f35-a11e-794b31a062d8","Type":"ContainerDied","Data":"ac66a313ddefbc43d9a48877544852152c3e72fd3a2009b4e0152389e6b320f0"} Dec 02 14:05:03 crc kubenswrapper[4900]: I1202 14:05:03.661191 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"561fbf10-b974-4f35-a11e-794b31a062d8","Type":"ContainerDied","Data":"c4031e0db2eba55c98feabb056dc51cbdb465f52e15710b55b3062b2f8a88e55"} Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.599827 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.711060 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"561fbf10-b974-4f35-a11e-794b31a062d8","Type":"ContainerDied","Data":"1030bc15396ba17725d2f77ea72417a503a6538f21afb065ee6d30833cf309ad"} Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.711117 4900 scope.go:117] "RemoveContainer" containerID="a927ea97cc7e2a2a52d80777a3d1ab730748d1d97293530d9e901cd232b59400" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.711258 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.715291 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-run-httpd\") pod \"561fbf10-b974-4f35-a11e-794b31a062d8\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.715440 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-scripts\") pod \"561fbf10-b974-4f35-a11e-794b31a062d8\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.715483 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-config-data\") pod \"561fbf10-b974-4f35-a11e-794b31a062d8\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.715514 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvgfz\" (UniqueName: \"kubernetes.io/projected/561fbf10-b974-4f35-a11e-794b31a062d8-kube-api-access-mvgfz\") pod \"561fbf10-b974-4f35-a11e-794b31a062d8\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.715540 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-log-httpd\") pod \"561fbf10-b974-4f35-a11e-794b31a062d8\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.715610 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-combined-ca-bundle\") pod \"561fbf10-b974-4f35-a11e-794b31a062d8\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.715673 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-sg-core-conf-yaml\") pod \"561fbf10-b974-4f35-a11e-794b31a062d8\" (UID: \"561fbf10-b974-4f35-a11e-794b31a062d8\") " Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.716316 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "561fbf10-b974-4f35-a11e-794b31a062d8" (UID: "561fbf10-b974-4f35-a11e-794b31a062d8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.725159 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/561fbf10-b974-4f35-a11e-794b31a062d8-kube-api-access-mvgfz" (OuterVolumeSpecName: "kube-api-access-mvgfz") pod "561fbf10-b974-4f35-a11e-794b31a062d8" (UID: "561fbf10-b974-4f35-a11e-794b31a062d8"). InnerVolumeSpecName "kube-api-access-mvgfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.726054 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "561fbf10-b974-4f35-a11e-794b31a062d8" (UID: "561fbf10-b974-4f35-a11e-794b31a062d8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.726157 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bj75w" event={"ID":"d28b512a-2406-4ad9-a594-7d408b8d3fb6","Type":"ContainerStarted","Data":"7ca56bba94a87eec7b0bc4f9d045dadc3d8e23854ce2eca32ac737bea66f175a"} Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.741846 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-scripts" (OuterVolumeSpecName: "scripts") pod "561fbf10-b974-4f35-a11e-794b31a062d8" (UID: "561fbf10-b974-4f35-a11e-794b31a062d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.753851 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bj75w" podStartSLOduration=1.254448638 podStartE2EDuration="10.75382819s" podCreationTimestamp="2025-12-02 14:04:56 +0000 UTC" firstStartedPulling="2025-12-02 14:04:56.894990276 +0000 UTC m=+1342.310804117" lastFinishedPulling="2025-12-02 14:05:06.394369818 +0000 UTC m=+1351.810183669" observedRunningTime="2025-12-02 14:05:06.747176382 +0000 UTC m=+1352.162990233" watchObservedRunningTime="2025-12-02 14:05:06.75382819 +0000 UTC m=+1352.169642041" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.783833 4900 scope.go:117] "RemoveContainer" containerID="16b647b077671b39bbb77f2bd625ab4b2797304525bcd5baa95182b3c1b7fd24" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.793296 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "561fbf10-b974-4f35-a11e-794b31a062d8" (UID: "561fbf10-b974-4f35-a11e-794b31a062d8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.806517 4900 scope.go:117] "RemoveContainer" containerID="ac66a313ddefbc43d9a48877544852152c3e72fd3a2009b4e0152389e6b320f0" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.817910 4900 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.817939 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.817948 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvgfz\" (UniqueName: \"kubernetes.io/projected/561fbf10-b974-4f35-a11e-794b31a062d8-kube-api-access-mvgfz\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.817958 4900 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/561fbf10-b974-4f35-a11e-794b31a062d8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.817968 4900 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.827443 4900 scope.go:117] "RemoveContainer" containerID="c4031e0db2eba55c98feabb056dc51cbdb465f52e15710b55b3062b2f8a88e55" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.849764 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "561fbf10-b974-4f35-a11e-794b31a062d8" (UID: "561fbf10-b974-4f35-a11e-794b31a062d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.853448 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-config-data" (OuterVolumeSpecName: "config-data") pod "561fbf10-b974-4f35-a11e-794b31a062d8" (UID: "561fbf10-b974-4f35-a11e-794b31a062d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.919787 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:06 crc kubenswrapper[4900]: I1202 14:05:06.919815 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561fbf10-b974-4f35-a11e-794b31a062d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.034310 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.040867 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.060216 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:07 crc kubenswrapper[4900]: E1202 14:05:07.060576 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="ceilometer-notification-agent" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.060594 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="ceilometer-notification-agent" Dec 02 14:05:07 crc kubenswrapper[4900]: E1202 14:05:07.060613 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="ceilometer-central-agent" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.060620 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="ceilometer-central-agent" Dec 02 14:05:07 crc kubenswrapper[4900]: E1202 14:05:07.060942 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="proxy-httpd" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.060957 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="proxy-httpd" Dec 02 14:05:07 crc kubenswrapper[4900]: E1202 14:05:07.061752 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="sg-core" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.061766 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="sg-core" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.061976 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="proxy-httpd" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.062002 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="ceilometer-notification-agent" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.062015 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="ceilometer-central-agent" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.062026 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" containerName="sg-core" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.079510 4900 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.082264 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.082278 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.101069 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.225005 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-log-httpd\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.225073 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bptrr\" (UniqueName: \"kubernetes.io/projected/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-kube-api-access-bptrr\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.225127 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-scripts\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.225152 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.225184 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.225212 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-config-data\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.225252 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-run-httpd\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.327377 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-log-httpd\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 
14:05:07.327456 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bptrr\" (UniqueName: \"kubernetes.io/projected/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-kube-api-access-bptrr\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.327506 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-scripts\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.327532 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.327569 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.327592 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-config-data\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.327618 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-run-httpd\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.327948 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-log-httpd\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.328294 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-run-httpd\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.331914 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.332658 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.333341 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-scripts\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.335805 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-config-data\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.347879 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bptrr\" (UniqueName: \"kubernetes.io/projected/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-kube-api-access-bptrr\") pod \"ceilometer-0\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.393163 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:05:07 crc kubenswrapper[4900]: I1202 14:05:07.873088 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:08 crc kubenswrapper[4900]: I1202 14:05:08.748954 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7a70f53-7cd6-4b00-8dd2-ecaede07170b","Type":"ContainerStarted","Data":"8b07a97751329fc6a9bf33a5e2fb6c490c584d499d077a73e9fd88631f28c992"} Dec 02 14:05:08 crc kubenswrapper[4900]: I1202 14:05:08.926492 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="561fbf10-b974-4f35-a11e-794b31a062d8" path="/var/lib/kubelet/pods/561fbf10-b974-4f35-a11e-794b31a062d8/volumes" Dec 02 14:05:09 crc kubenswrapper[4900]: I1202 14:05:09.096788 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:09 crc kubenswrapper[4900]: I1202 14:05:09.097029 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c9986100-46f5-40b2-b20c-17e127f48575" containerName="glance-log" containerID="cri-o://8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621" gracePeriod=30 Dec 02 14:05:09 crc kubenswrapper[4900]: I1202 14:05:09.097127 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c9986100-46f5-40b2-b20c-17e127f48575" containerName="glance-httpd" containerID="cri-o://561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf" gracePeriod=30 Dec 02 14:05:09 crc kubenswrapper[4900]: I1202 14:05:09.762197 4900 generic.go:334] "Generic (PLEG): container finished" podID="c9986100-46f5-40b2-b20c-17e127f48575" containerID="8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621" exitCode=143 Dec 02 14:05:09 crc kubenswrapper[4900]: I1202 14:05:09.762331 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9986100-46f5-40b2-b20c-17e127f48575","Type":"ContainerDied","Data":"8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621"} Dec 02 14:05:09 crc kubenswrapper[4900]: I1202 14:05:09.765068 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c7a70f53-7cd6-4b00-8dd2-ecaede07170b","Type":"ContainerStarted","Data":"a3cc0feda20cba2d8349048a59c106d83da53bc79b524b2af49af9e6366a71c7"} Dec 02 14:05:09 crc kubenswrapper[4900]: I1202 14:05:09.765118 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7a70f53-7cd6-4b00-8dd2-ecaede07170b","Type":"ContainerStarted","Data":"f4fcc034fef4eafcf4d7c6367b6f99ec6cb63494ed51fe851e20f54fcb8def4c"} Dec 02 14:05:10 crc kubenswrapper[4900]: I1202 14:05:10.093280 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:10 crc kubenswrapper[4900]: I1202 14:05:10.094426 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="25e4b3fb-1235-4d67-b70b-53760e92e6c5" containerName="glance-log" containerID="cri-o://ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6" gracePeriod=30 Dec 02 14:05:10 crc kubenswrapper[4900]: I1202 14:05:10.094539 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="25e4b3fb-1235-4d67-b70b-53760e92e6c5" containerName="glance-httpd" containerID="cri-o://2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5" gracePeriod=30 Dec 02 14:05:10 crc kubenswrapper[4900]: I1202 14:05:10.787518 4900 generic.go:334] "Generic (PLEG): container finished" podID="25e4b3fb-1235-4d67-b70b-53760e92e6c5" containerID="ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6" exitCode=143 Dec 02 14:05:10 crc kubenswrapper[4900]: I1202 14:05:10.787698 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25e4b3fb-1235-4d67-b70b-53760e92e6c5","Type":"ContainerDied","Data":"ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6"} Dec 02 14:05:11 crc kubenswrapper[4900]: I1202 14:05:11.713439 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:11 crc kubenswrapper[4900]: I1202 14:05:11.796991 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7a70f53-7cd6-4b00-8dd2-ecaede07170b","Type":"ContainerStarted","Data":"171f9d4171956244de32ee853759f16afb430ef2c22fee64095ab36afa536b8f"} Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.738414 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.835042 4900 generic.go:334] "Generic (PLEG): container finished" podID="c9986100-46f5-40b2-b20c-17e127f48575" containerID="561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf" exitCode=0 Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.835125 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9986100-46f5-40b2-b20c-17e127f48575","Type":"ContainerDied","Data":"561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf"} Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.835154 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9986100-46f5-40b2-b20c-17e127f48575","Type":"ContainerDied","Data":"31afa1d90ffc3eb3b2811089930a7d4a5d800e2c7a80b1d24fb0bce435318111"} Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.835173 4900 scope.go:117] "RemoveContainer" containerID="561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.835309 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.839707 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7a70f53-7cd6-4b00-8dd2-ecaede07170b","Type":"ContainerStarted","Data":"270a8e19f73ef7e52adeea79cf31f365d264c7d7e2522765851fc5834c029bbd"} Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.839946 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="ceilometer-central-agent" containerID="cri-o://f4fcc034fef4eafcf4d7c6367b6f99ec6cb63494ed51fe851e20f54fcb8def4c" gracePeriod=30 Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.840250 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.840563 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="proxy-httpd" containerID="cri-o://270a8e19f73ef7e52adeea79cf31f365d264c7d7e2522765851fc5834c029bbd" gracePeriod=30 Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.840660 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="sg-core" containerID="cri-o://171f9d4171956244de32ee853759f16afb430ef2c22fee64095ab36afa536b8f" gracePeriod=30 Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.840700 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="ceilometer-notification-agent" containerID="cri-o://a3cc0feda20cba2d8349048a59c106d83da53bc79b524b2af49af9e6366a71c7" gracePeriod=30 Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.861045 4900 scope.go:117] "RemoveContainer" containerID="8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.862480 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=1.7272145110000001 podStartE2EDuration="5.862469804s" podCreationTimestamp="2025-12-02 14:05:07 +0000 UTC" firstStartedPulling="2025-12-02 14:05:07.885101146 +0000 UTC m=+1353.300914997" lastFinishedPulling="2025-12-02 14:05:12.020356439 +0000 UTC m=+1357.436170290" observedRunningTime="2025-12-02 14:05:12.86160696 +0000 UTC m=+1358.277420811" watchObservedRunningTime="2025-12-02 14:05:12.862469804 +0000 UTC m=+1358.278283655" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.901965 4900 scope.go:117] "RemoveContainer" containerID="561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf" Dec 02 14:05:12 crc kubenswrapper[4900]: E1202 14:05:12.903916 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf\": container with ID starting with 561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf not found: ID does not exist" containerID="561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.903960 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf"} err="failed to get container status \"561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf\": rpc error: code = NotFound desc = could not find container \"561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf\": container with ID starting with 561bfb9497dd83d45b1453dd53ea612fe443fa086ba0a04ff1c04c6da160d7cf not found: ID does not exist" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.903986 4900 scope.go:117] "RemoveContainer" containerID="8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621" Dec 02 14:05:12 crc kubenswrapper[4900]: E1202 14:05:12.904320 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621\": container with ID starting with 8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621 not found: ID does not exist" containerID="8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.904358 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621"} err="failed to get container status \"8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621\": rpc error: code = NotFound desc = could not find container \"8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621\": container with ID starting with 8f459eabe06fc9805df2822a76fe82d77511aba2ac5965a188f6d64deb79e621 not found: ID does not exist" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.931419 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-public-tls-certs\") pod \"c9986100-46f5-40b2-b20c-17e127f48575\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.931532 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-scripts\") pod 
\"c9986100-46f5-40b2-b20c-17e127f48575\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.931553 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-combined-ca-bundle\") pod \"c9986100-46f5-40b2-b20c-17e127f48575\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.931591 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"c9986100-46f5-40b2-b20c-17e127f48575\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.931687 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-config-data\") pod \"c9986100-46f5-40b2-b20c-17e127f48575\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.931734 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-logs\") pod \"c9986100-46f5-40b2-b20c-17e127f48575\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.931827 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-httpd-run\") pod \"c9986100-46f5-40b2-b20c-17e127f48575\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.931896 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzx8r\" (UniqueName: \"kubernetes.io/projected/c9986100-46f5-40b2-b20c-17e127f48575-kube-api-access-rzx8r\") pod \"c9986100-46f5-40b2-b20c-17e127f48575\" (UID: \"c9986100-46f5-40b2-b20c-17e127f48575\") " Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.932681 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-logs" (OuterVolumeSpecName: "logs") pod "c9986100-46f5-40b2-b20c-17e127f48575" (UID: "c9986100-46f5-40b2-b20c-17e127f48575"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.933026 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c9986100-46f5-40b2-b20c-17e127f48575" (UID: "c9986100-46f5-40b2-b20c-17e127f48575"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.937241 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9986100-46f5-40b2-b20c-17e127f48575-kube-api-access-rzx8r" (OuterVolumeSpecName: "kube-api-access-rzx8r") pod "c9986100-46f5-40b2-b20c-17e127f48575" (UID: "c9986100-46f5-40b2-b20c-17e127f48575"). InnerVolumeSpecName "kube-api-access-rzx8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.937361 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "c9986100-46f5-40b2-b20c-17e127f48575" (UID: "c9986100-46f5-40b2-b20c-17e127f48575"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.937423 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-scripts" (OuterVolumeSpecName: "scripts") pod "c9986100-46f5-40b2-b20c-17e127f48575" (UID: "c9986100-46f5-40b2-b20c-17e127f48575"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.961510 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9986100-46f5-40b2-b20c-17e127f48575" (UID: "c9986100-46f5-40b2-b20c-17e127f48575"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.994096 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-config-data" (OuterVolumeSpecName: "config-data") pod "c9986100-46f5-40b2-b20c-17e127f48575" (UID: "c9986100-46f5-40b2-b20c-17e127f48575"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:12 crc kubenswrapper[4900]: I1202 14:05:12.996249 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c9986100-46f5-40b2-b20c-17e127f48575" (UID: "c9986100-46f5-40b2-b20c-17e127f48575"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.034015 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.034050 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.034061 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9986100-46f5-40b2-b20c-17e127f48575-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.034070 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzx8r\" (UniqueName: \"kubernetes.io/projected/c9986100-46f5-40b2-b20c-17e127f48575-kube-api-access-rzx8r\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.034081 4900 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.034089 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.034097 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9986100-46f5-40b2-b20c-17e127f48575-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.034120 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.052701 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.135765 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.167986 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.178239 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.193623 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:13 crc kubenswrapper[4900]: E1202 14:05:13.194302 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9986100-46f5-40b2-b20c-17e127f48575" containerName="glance-httpd" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.194318 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9986100-46f5-40b2-b20c-17e127f48575" containerName="glance-httpd" Dec 02 14:05:13 crc kubenswrapper[4900]: E1202 
14:05:13.194337 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9986100-46f5-40b2-b20c-17e127f48575" containerName="glance-log" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.194346 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9986100-46f5-40b2-b20c-17e127f48575" containerName="glance-log" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.194543 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9986100-46f5-40b2-b20c-17e127f48575" containerName="glance-log" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.194570 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9986100-46f5-40b2-b20c-17e127f48575" containerName="glance-httpd" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.195530 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.202991 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.222511 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.223602 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.341083 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.341130 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.341169 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb26b\" (UniqueName: \"kubernetes.io/projected/1a302619-4a69-4e62-b7cb-6812b771f6d4-kube-api-access-sb26b\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.341211 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.341244 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.341268 4900 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.341290 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-logs\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.341333 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.442820 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.442902 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.443443 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-logs\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.442937 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-logs\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.443443 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.443589 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.445120 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.445179 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.445290 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb26b\" (UniqueName: \"kubernetes.io/projected/1a302619-4a69-4e62-b7cb-6812b771f6d4-kube-api-access-sb26b\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.445385 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.445518 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.452095 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.452297 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.452611 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.463413 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.470813 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb26b\" (UniqueName: \"kubernetes.io/projected/1a302619-4a69-4e62-b7cb-6812b771f6d4-kube-api-access-sb26b\") 
pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.484367 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.547636 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.670430 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.751783 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-internal-tls-certs\") pod \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.751821 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6cbr\" (UniqueName: \"kubernetes.io/projected/25e4b3fb-1235-4d67-b70b-53760e92e6c5-kube-api-access-d6cbr\") pod \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.751852 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-httpd-run\") pod \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.751878 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-config-data\") pod \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.751920 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-logs\") pod \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.751958 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-combined-ca-bundle\") pod \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.752009 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-scripts\") pod \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.752043 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"25e4b3fb-1235-4d67-b70b-53760e92e6c5\" (UID: \"25e4b3fb-1235-4d67-b70b-53760e92e6c5\") " Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.752456 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "25e4b3fb-1235-4d67-b70b-53760e92e6c5" (UID: "25e4b3fb-1235-4d67-b70b-53760e92e6c5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.752739 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-logs" (OuterVolumeSpecName: "logs") pod "25e4b3fb-1235-4d67-b70b-53760e92e6c5" (UID: "25e4b3fb-1235-4d67-b70b-53760e92e6c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.757434 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "25e4b3fb-1235-4d67-b70b-53760e92e6c5" (UID: "25e4b3fb-1235-4d67-b70b-53760e92e6c5"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.760115 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-scripts" (OuterVolumeSpecName: "scripts") pod "25e4b3fb-1235-4d67-b70b-53760e92e6c5" (UID: "25e4b3fb-1235-4d67-b70b-53760e92e6c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.764264 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e4b3fb-1235-4d67-b70b-53760e92e6c5-kube-api-access-d6cbr" (OuterVolumeSpecName: "kube-api-access-d6cbr") pod "25e4b3fb-1235-4d67-b70b-53760e92e6c5" (UID: "25e4b3fb-1235-4d67-b70b-53760e92e6c5"). InnerVolumeSpecName "kube-api-access-d6cbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.788431 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25e4b3fb-1235-4d67-b70b-53760e92e6c5" (UID: "25e4b3fb-1235-4d67-b70b-53760e92e6c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.803178 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "25e4b3fb-1235-4d67-b70b-53760e92e6c5" (UID: "25e4b3fb-1235-4d67-b70b-53760e92e6c5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.804831 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-config-data" (OuterVolumeSpecName: "config-data") pod "25e4b3fb-1235-4d67-b70b-53760e92e6c5" (UID: "25e4b3fb-1235-4d67-b70b-53760e92e6c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.852139 4900 generic.go:334] "Generic (PLEG): container finished" podID="25e4b3fb-1235-4d67-b70b-53760e92e6c5" containerID="2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5" exitCode=0 Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.852200 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25e4b3fb-1235-4d67-b70b-53760e92e6c5","Type":"ContainerDied","Data":"2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5"} Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.852227 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25e4b3fb-1235-4d67-b70b-53760e92e6c5","Type":"ContainerDied","Data":"6870737e0cf3ecdff8826fd07261bc7b3573137f0200b94fd8cdf5cf02530176"} Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.852243 4900 scope.go:117] "RemoveContainer" containerID="2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.852336 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.861218 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.861323 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.861383 4900 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.861446 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6cbr\" (UniqueName: \"kubernetes.io/projected/25e4b3fb-1235-4d67-b70b-53760e92e6c5-kube-api-access-d6cbr\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.861506 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.861563 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.861615 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25e4b3fb-1235-4d67-b70b-53760e92e6c5-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.861696 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e4b3fb-1235-4d67-b70b-53760e92e6c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.873708 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c7a70f53-7cd6-4b00-8dd2-ecaede07170b","Type":"ContainerDied","Data":"270a8e19f73ef7e52adeea79cf31f365d264c7d7e2522765851fc5834c029bbd"} Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.884369 4900 generic.go:334] "Generic (PLEG): container finished" podID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerID="270a8e19f73ef7e52adeea79cf31f365d264c7d7e2522765851fc5834c029bbd" exitCode=0 Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.884449 4900 generic.go:334] "Generic (PLEG): container finished" podID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerID="171f9d4171956244de32ee853759f16afb430ef2c22fee64095ab36afa536b8f" exitCode=2 Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.884464 4900 generic.go:334] "Generic (PLEG): container finished" podID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerID="a3cc0feda20cba2d8349048a59c106d83da53bc79b524b2af49af9e6366a71c7" exitCode=0 Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.884495 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7a70f53-7cd6-4b00-8dd2-ecaede07170b","Type":"ContainerDied","Data":"171f9d4171956244de32ee853759f16afb430ef2c22fee64095ab36afa536b8f"} Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.884526 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7a70f53-7cd6-4b00-8dd2-ecaede07170b","Type":"ContainerDied","Data":"a3cc0feda20cba2d8349048a59c106d83da53bc79b524b2af49af9e6366a71c7"} Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.897912 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.901551 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.907254 4900 scope.go:117] "RemoveContainer" containerID="ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.917279 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.930709 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:13 crc kubenswrapper[4900]: E1202 14:05:13.931225 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e4b3fb-1235-4d67-b70b-53760e92e6c5" containerName="glance-httpd" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.931238 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e4b3fb-1235-4d67-b70b-53760e92e6c5" containerName="glance-httpd" Dec 02 14:05:13 crc kubenswrapper[4900]: E1202 14:05:13.931265 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e4b3fb-1235-4d67-b70b-53760e92e6c5" containerName="glance-log" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.931271 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e4b3fb-1235-4d67-b70b-53760e92e6c5" containerName="glance-log" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.931454 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e4b3fb-1235-4d67-b70b-53760e92e6c5" containerName="glance-log" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.931463 4900 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="25e4b3fb-1235-4d67-b70b-53760e92e6c5" containerName="glance-httpd" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.932446 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.934914 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.935205 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.936913 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.948448 4900 scope.go:117] "RemoveContainer" containerID="2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5" Dec 02 14:05:13 crc kubenswrapper[4900]: E1202 14:05:13.949205 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5\": container with ID starting with 2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5 not found: ID does not exist" containerID="2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.949240 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5"} err="failed to get container status \"2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5\": rpc error: code = NotFound desc = could not find container \"2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5\": container with ID starting with 2133817404aaf8634a903ef2ea36d4d821c23315237c4bcad0bb18a6af071df5 not found: ID does not exist" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.949272 4900 scope.go:117] "RemoveContainer" containerID="ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6" Dec 02 14:05:13 crc kubenswrapper[4900]: E1202 14:05:13.957208 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6\": container with ID starting with ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6 not found: ID does not exist" containerID="ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.957248 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6"} err="failed to get container status \"ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6\": rpc error: code = NotFound desc = could not find container \"ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6\": container with ID starting with ad4755553700e712764bf8dea701a0a7b253dee8858ba00bf819b13d8f642ad6 not found: ID does not exist" Dec 02 14:05:13 crc kubenswrapper[4900]: I1202 14:05:13.963387 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 
14:05:14.065114 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.065358 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.065446 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.065514 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.065613 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.065707 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-logs\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.065772 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.065862 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49vkm\" (UniqueName: \"kubernetes.io/projected/9e062e50-5a22-45c0-adab-9f78980eb851-kube-api-access-49vkm\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.119349 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:14 crc kubenswrapper[4900]: W1202 14:05:14.120253 4900 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a302619_4a69_4e62_b7cb_6812b771f6d4.slice/crio-dfb29081b0c147c1598ac53855a5d14fb7c89cbc4ff27d4e59402a2dc5b280a0 WatchSource:0}: Error finding container dfb29081b0c147c1598ac53855a5d14fb7c89cbc4ff27d4e59402a2dc5b280a0: Status 404 returned error can't find the container with id dfb29081b0c147c1598ac53855a5d14fb7c89cbc4ff27d4e59402a2dc5b280a0 Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.167509 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.167588 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.167618 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.167641 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.167722 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.167751 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-logs\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.167770 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.167804 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49vkm\" (UniqueName: \"kubernetes.io/projected/9e062e50-5a22-45c0-adab-9f78980eb851-kube-api-access-49vkm\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.168429 4900 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.174252 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.174509 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.177883 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.178022 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-logs\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.184320 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.185052 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.187463 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49vkm\" (UniqueName: \"kubernetes.io/projected/9e062e50-5a22-45c0-adab-9f78980eb851-kube-api-access-49vkm\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.212558 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " pod="openstack/glance-default-internal-api-0"
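
Mounting the local volume above happens in two phases: "MountVolume.MountDevice succeeded" stages the device once at its global path (/mnt/openstack/pv12), and only then does "MountVolume.SetUp succeeded" expose it to the pod. A simplified model of that staging-then-setup ordering, with illustrative names:

package main

import "fmt"

// Two-phase mount as in the log: MountDevice prepares the volume's global
// mount point once, then SetUp performs the per-pod mount. Paths and
// names are illustrative, not kubelet's real implementation.
type localVolume struct {
	devicePath    string // e.g. /mnt/openstack/pv12
	deviceMounted bool
}

func (v *localVolume) MountDevice() {
	if v.deviceMounted {
		return // already staged, possibly for another pod
	}
	v.deviceMounted = true
	fmt.Printf("MountVolume.MountDevice succeeded, device mount path %q\n", v.devicePath)
}

func (v *localVolume) SetUp(podUID string) {
	v.MountDevice() // staging must precede the per-pod bind mount
	fmt.Printf("MountVolume.SetUp succeeded for pod %s\n", podUID)
}

func main() {
	v := &localVolume{devicePath: "/mnt/openstack/pv12"}
	v.SetUp("9e062e50-5a22-45c0-adab-9f78980eb851")
}

Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.258266 4900 util.go:30] "No sandbox for pod can be found.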
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.825942 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:14 crc kubenswrapper[4900]: W1202 14:05:14.826215 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e062e50_5a22_45c0_adab_9f78980eb851.slice/crio-647aaaa48b9f9742e4f1ad36b654728741cad7f8cc1bfae5cbdb5f7f261f9a51 WatchSource:0}: Error finding container 647aaaa48b9f9742e4f1ad36b654728741cad7f8cc1bfae5cbdb5f7f261f9a51: Status 404 returned error can't find the container with id 647aaaa48b9f9742e4f1ad36b654728741cad7f8cc1bfae5cbdb5f7f261f9a51 Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.923887 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e4b3fb-1235-4d67-b70b-53760e92e6c5" path="/var/lib/kubelet/pods/25e4b3fb-1235-4d67-b70b-53760e92e6c5/volumes" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.925294 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9986100-46f5-40b2-b20c-17e127f48575" path="/var/lib/kubelet/pods/c9986100-46f5-40b2-b20c-17e127f48575/volumes" Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.925966 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9e062e50-5a22-45c0-adab-9f78980eb851","Type":"ContainerStarted","Data":"647aaaa48b9f9742e4f1ad36b654728741cad7f8cc1bfae5cbdb5f7f261f9a51"} Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.925995 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a302619-4a69-4e62-b7cb-6812b771f6d4","Type":"ContainerStarted","Data":"9ad89e0edddd80cb4770a29e75a6ba59954a7129d14ce51b8b71ef393689cab0"} Dec 02 14:05:14 crc kubenswrapper[4900]: I1202 14:05:14.926009 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a302619-4a69-4e62-b7cb-6812b771f6d4","Type":"ContainerStarted","Data":"dfb29081b0c147c1598ac53855a5d14fb7c89cbc4ff27d4e59402a2dc5b280a0"} Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.116115 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.116172 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.116213 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.116940 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71201562a586bb41b092fbbc0aed881de288c0da40461c0877afbe0f47cb3b45"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.117000 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://71201562a586bb41b092fbbc0aed881de288c0da40461c0877afbe0f47cb3b45" gracePeriod=600 Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.935081 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9e062e50-5a22-45c0-adab-9f78980eb851","Type":"ContainerStarted","Data":"8b057749e05312ac8b867b23be07aedbc2a70bbb08e3cd32bd27a1b2582ac140"} Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.937717 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a302619-4a69-4e62-b7cb-6812b771f6d4","Type":"ContainerStarted","Data":"7eb9fd41a54b8a69b7cc6d75b1be55aec13e8d931357769f99ce0bad86542d63"} Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.945963 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="71201562a586bb41b092fbbc0aed881de288c0da40461c0877afbe0f47cb3b45" exitCode=0 Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.945994 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"71201562a586bb41b092fbbc0aed881de288c0da40461c0877afbe0f47cb3b45"} Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.946012 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"} Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.946029 4900 scope.go:117] "RemoveContainer" containerID="b6f7e930d50720476a444b744878daf723fcb619125b830c5f6dce6cf097c072" Dec 02 14:05:15 crc kubenswrapper[4900]: I1202 14:05:15.965431 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.9654169809999997 podStartE2EDuration="2.965416981s" podCreationTimestamp="2025-12-02 14:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:15.95862261 +0000 UTC m=+1361.374436461" watchObservedRunningTime="2025-12-02 14:05:15.965416981 +0000 UTC m=+1361.381230832" Dec 02 14:05:16 crc kubenswrapper[4900]: I1202 14:05:16.964549 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9e062e50-5a22-45c0-adab-9f78980eb851","Type":"ContainerStarted","Data":"974bb345b96f61cacf2fe7abb81edc513dd4982f9f5eaf42cece693b6d995322"}
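
The machine-config-daemon restart above follows the standard liveness flow: the HTTP probe against 127.0.0.1:8798/health gets connection refused, kubelet marks the container unhealthy, kills it with gracePeriod=600, and a replacement starts (the ContainerDied/ContainerStarted pair). A minimal sketch of such an HTTP liveness check; the URL is taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe models the liveness check in the log: an HTTP GET against the
// container's health endpoint, where a refused connection or an error
// status counts as a failure.
func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
		fmt.Println("Killing container with a grace period: gracePeriod=600")
		// A real kubelet would send SIGTERM, wait up to the grace period,
		// then SIGKILL and start a replacement container.
	}
}

Dec 02 14:05:17 crc kubenswrapper[4900]: I1202 14:05:17.004562 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.004534209 podStartE2EDuration="4.004534209s" podCreationTimestamp="2025-12-02 14:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:16.983118236 +0000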
UTC m=+1362.398932087" watchObservedRunningTime="2025-12-02 14:05:17.004534209 +0000 UTC m=+1362.420348100" Dec 02 14:05:17 crc kubenswrapper[4900]: I1202 14:05:17.977432 4900 generic.go:334] "Generic (PLEG): container finished" podID="d28b512a-2406-4ad9-a594-7d408b8d3fb6" containerID="7ca56bba94a87eec7b0bc4f9d045dadc3d8e23854ce2eca32ac737bea66f175a" exitCode=0 Dec 02 14:05:17 crc kubenswrapper[4900]: I1202 14:05:17.977691 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bj75w" event={"ID":"d28b512a-2406-4ad9-a594-7d408b8d3fb6","Type":"ContainerDied","Data":"7ca56bba94a87eec7b0bc4f9d045dadc3d8e23854ce2eca32ac737bea66f175a"} Dec 02 14:05:18 crc kubenswrapper[4900]: I1202 14:05:18.995303 4900 generic.go:334] "Generic (PLEG): container finished" podID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerID="f4fcc034fef4eafcf4d7c6367b6f99ec6cb63494ed51fe851e20f54fcb8def4c" exitCode=0 Dec 02 14:05:18 crc kubenswrapper[4900]: I1202 14:05:18.995393 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7a70f53-7cd6-4b00-8dd2-ecaede07170b","Type":"ContainerDied","Data":"f4fcc034fef4eafcf4d7c6367b6f99ec6cb63494ed51fe851e20f54fcb8def4c"} Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.481140 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.574029 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-combined-ca-bundle\") pod \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.574448 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf747\" (UniqueName: \"kubernetes.io/projected/d28b512a-2406-4ad9-a594-7d408b8d3fb6-kube-api-access-qf747\") pod \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.574552 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-scripts\") pod \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.574579 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-config-data\") pod \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\" (UID: \"d28b512a-2406-4ad9-a594-7d408b8d3fb6\") " Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.581857 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-scripts" (OuterVolumeSpecName: "scripts") pod "d28b512a-2406-4ad9-a594-7d408b8d3fb6" (UID: "d28b512a-2406-4ad9-a594-7d408b8d3fb6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.581909 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d28b512a-2406-4ad9-a594-7d408b8d3fb6-kube-api-access-qf747" (OuterVolumeSpecName: "kube-api-access-qf747") pod "d28b512a-2406-4ad9-a594-7d408b8d3fb6" (UID: "d28b512a-2406-4ad9-a594-7d408b8d3fb6"). InnerVolumeSpecName "kube-api-access-qf747". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.587396 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.603628 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-config-data" (OuterVolumeSpecName: "config-data") pod "d28b512a-2406-4ad9-a594-7d408b8d3fb6" (UID: "d28b512a-2406-4ad9-a594-7d408b8d3fb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.611579 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d28b512a-2406-4ad9-a594-7d408b8d3fb6" (UID: "d28b512a-2406-4ad9-a594-7d408b8d3fb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.675582 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-combined-ca-bundle\") pod \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.675637 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-scripts\") pod \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.675742 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-run-httpd\") pod \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.675815 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-sg-core-conf-yaml\") pod \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.675859 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-log-httpd\") pod \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.675914 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bptrr\" (UniqueName: 
\"kubernetes.io/projected/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-kube-api-access-bptrr\") pod \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.675959 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-config-data\") pod \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\" (UID: \"c7a70f53-7cd6-4b00-8dd2-ecaede07170b\") " Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.676298 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf747\" (UniqueName: \"kubernetes.io/projected/d28b512a-2406-4ad9-a594-7d408b8d3fb6-kube-api-access-qf747\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.676315 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.676327 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.676337 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28b512a-2406-4ad9-a594-7d408b8d3fb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.676482 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c7a70f53-7cd6-4b00-8dd2-ecaede07170b" (UID: "c7a70f53-7cd6-4b00-8dd2-ecaede07170b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.677250 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c7a70f53-7cd6-4b00-8dd2-ecaede07170b" (UID: "c7a70f53-7cd6-4b00-8dd2-ecaede07170b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.680932 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-scripts" (OuterVolumeSpecName: "scripts") pod "c7a70f53-7cd6-4b00-8dd2-ecaede07170b" (UID: "c7a70f53-7cd6-4b00-8dd2-ecaede07170b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.681443 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-kube-api-access-bptrr" (OuterVolumeSpecName: "kube-api-access-bptrr") pod "c7a70f53-7cd6-4b00-8dd2-ecaede07170b" (UID: "c7a70f53-7cd6-4b00-8dd2-ecaede07170b"). InnerVolumeSpecName "kube-api-access-bptrr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.700254 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c7a70f53-7cd6-4b00-8dd2-ecaede07170b" (UID: "c7a70f53-7cd6-4b00-8dd2-ecaede07170b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.745025 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7a70f53-7cd6-4b00-8dd2-ecaede07170b" (UID: "c7a70f53-7cd6-4b00-8dd2-ecaede07170b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.786591 4900 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.786768 4900 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.786796 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bptrr\" (UniqueName: \"kubernetes.io/projected/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-kube-api-access-bptrr\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.786825 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.786847 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.786932 4900 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.808173 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-config-data" (OuterVolumeSpecName: "config-data") pod "c7a70f53-7cd6-4b00-8dd2-ecaede07170b" (UID: "c7a70f53-7cd6-4b00-8dd2-ecaede07170b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:19 crc kubenswrapper[4900]: I1202 14:05:19.890807 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a70f53-7cd6-4b00-8dd2-ecaede07170b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.012549 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bj75w" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.012528 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bj75w" event={"ID":"d28b512a-2406-4ad9-a594-7d408b8d3fb6","Type":"ContainerDied","Data":"fcab8fba2dd84ee063be7ac90c56f73f8909e3b894d9c602b8c48f4e913f5b8f"} Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.012993 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcab8fba2dd84ee063be7ac90c56f73f8909e3b894d9c602b8c48f4e913f5b8f" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.025526 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7a70f53-7cd6-4b00-8dd2-ecaede07170b","Type":"ContainerDied","Data":"8b07a97751329fc6a9bf33a5e2fb6c490c584d499d077a73e9fd88631f28c992"} Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.025586 4900 scope.go:117] "RemoveContainer" containerID="270a8e19f73ef7e52adeea79cf31f365d264c7d7e2522765851fc5834c029bbd" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.025812 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.060382 4900 scope.go:117] "RemoveContainer" containerID="171f9d4171956244de32ee853759f16afb430ef2c22fee64095ab36afa536b8f" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.080831 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.098441 4900 scope.go:117] "RemoveContainer" containerID="a3cc0feda20cba2d8349048a59c106d83da53bc79b524b2af49af9e6366a71c7" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.099103 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.118733 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:20 crc kubenswrapper[4900]: E1202 14:05:20.119163 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28b512a-2406-4ad9-a594-7d408b8d3fb6" containerName="nova-cell0-conductor-db-sync" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.119178 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28b512a-2406-4ad9-a594-7d408b8d3fb6" containerName="nova-cell0-conductor-db-sync" Dec 02 14:05:20 crc kubenswrapper[4900]: E1202 14:05:20.119209 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="sg-core" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.119215 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="sg-core" Dec 02 14:05:20 crc kubenswrapper[4900]: E1202 14:05:20.119226 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="proxy-httpd" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.119235 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="proxy-httpd" Dec 02 14:05:20 crc kubenswrapper[4900]: E1202 14:05:20.119256 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="ceilometer-central-agent" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.119262 4900 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="ceilometer-central-agent" Dec 02 14:05:20 crc kubenswrapper[4900]: E1202 14:05:20.119276 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="ceilometer-notification-agent" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.119282 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="ceilometer-notification-agent" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.119476 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="sg-core" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.119494 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28b512a-2406-4ad9-a594-7d408b8d3fb6" containerName="nova-cell0-conductor-db-sync" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.119507 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="proxy-httpd" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.119515 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="ceilometer-central-agent" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.119527 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" containerName="ceilometer-notification-agent" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.121247 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.129708 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.135303 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.135559 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.137084 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.138654 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.140521 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-h7qq8" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.141718 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.153399 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.177386 4900 scope.go:117] "RemoveContainer" containerID="f4fcc034fef4eafcf4d7c6367b6f99ec6cb63494ed51fe851e20f54fcb8def4c" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.195877 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kpcj\" (UniqueName: \"kubernetes.io/projected/56c7618b-ae25-4801-a633-003bd8d3c32e-kube-api-access-7kpcj\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.195926 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-run-httpd\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.196167 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-log-httpd\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.196227 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-config-data\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.196339 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.196416 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.196440 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.196547 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8xzb9\" (UniqueName: \"kubernetes.io/projected/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-kube-api-access-8xzb9\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.196639 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-scripts\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.196680 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.299119 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-log-httpd\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.299197 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-config-data\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.299289 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.299546 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.299590 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.299705 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xzb9\" (UniqueName: \"kubernetes.io/projected/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-kube-api-access-8xzb9\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.299771 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-scripts\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc 
kubenswrapper[4900]: I1202 14:05:20.299809 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.299873 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kpcj\" (UniqueName: \"kubernetes.io/projected/56c7618b-ae25-4801-a633-003bd8d3c32e-kube-api-access-7kpcj\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.299909 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-run-httpd\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.299749 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-log-httpd\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.300547 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-run-httpd\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.306465 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-config-data\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.306704 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.307263 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.307746 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.308058 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-scripts\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc 
kubenswrapper[4900]: I1202 14:05:20.313703 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.316904 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xzb9\" (UniqueName: \"kubernetes.io/projected/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-kube-api-access-8xzb9\") pod \"nova-cell0-conductor-0\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.319930 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kpcj\" (UniqueName: \"kubernetes.io/projected/56c7618b-ae25-4801-a633-003bd8d3c32e-kube-api-access-7kpcj\") pod \"ceilometer-0\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") " pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.483211 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.494080 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:20 crc kubenswrapper[4900]: W1202 14:05:20.828926 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf4fd62f_751c_4ba7_8582_3d953bdc0bf6.slice/crio-94e930bae7a99946e0d51ef522ed9cfd677947ca6133a0074cf61fd4e4b9a035 WatchSource:0}: Error finding container 94e930bae7a99946e0d51ef522ed9cfd677947ca6133a0074cf61fd4e4b9a035: Status 404 returned error can't find the container with id 94e930bae7a99946e0d51ef522ed9cfd677947ca6133a0074cf61fd4e4b9a035 Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.832661 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.922183 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a70f53-7cd6-4b00-8dd2-ecaede07170b" path="/var/lib/kubelet/pods/c7a70f53-7cd6-4b00-8dd2-ecaede07170b/volumes" Dec 02 14:05:20 crc kubenswrapper[4900]: I1202 14:05:20.979468 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:21 crc kubenswrapper[4900]: I1202 14:05:21.036853 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6","Type":"ContainerStarted","Data":"94e930bae7a99946e0d51ef522ed9cfd677947ca6133a0074cf61fd4e4b9a035"} Dec 02 14:05:21 crc kubenswrapper[4900]: I1202 14:05:21.040737 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c7618b-ae25-4801-a633-003bd8d3c32e","Type":"ContainerStarted","Data":"3052a039de38b05ede57b85a5e63b44715d992e19c12d26d0b491b0b5d376e45"} Dec 02 14:05:22 crc kubenswrapper[4900]: I1202 14:05:22.053031 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6","Type":"ContainerStarted","Data":"7a015b969677f8a38ffbf9b4e7f89474014d3449b484894c6c2a8469cb1a3e61"} Dec 02 14:05:22 crc kubenswrapper[4900]: I1202 14:05:22.055031 4900 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:22 crc kubenswrapper[4900]: I1202 14:05:22.057160 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c7618b-ae25-4801-a633-003bd8d3c32e","Type":"ContainerStarted","Data":"a78d9229515bde3b62004caeaa0c53e8b6cfd9765abcd5c53f628ac1b354bf0b"} Dec 02 14:05:22 crc kubenswrapper[4900]: I1202 14:05:22.082289 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.082264476 podStartE2EDuration="2.082264476s" podCreationTimestamp="2025-12-02 14:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:22.071820601 +0000 UTC m=+1367.487634492" watchObservedRunningTime="2025-12-02 14:05:22.082264476 +0000 UTC m=+1367.498078337" Dec 02 14:05:23 crc kubenswrapper[4900]: I1202 14:05:23.071486 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c7618b-ae25-4801-a633-003bd8d3c32e","Type":"ContainerStarted","Data":"02cf5f9ac713e3835b38492821cd3b1fb09242d8b48c150fec8ab3435bad2043"} Dec 02 14:05:23 crc kubenswrapper[4900]: I1202 14:05:23.548625 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 14:05:23 crc kubenswrapper[4900]: I1202 14:05:23.548683 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 14:05:23 crc kubenswrapper[4900]: I1202 14:05:23.585447 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 14:05:23 crc kubenswrapper[4900]: I1202 14:05:23.593674 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 14:05:24 crc kubenswrapper[4900]: I1202 14:05:24.089354 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c7618b-ae25-4801-a633-003bd8d3c32e","Type":"ContainerStarted","Data":"198cd9db43c9830f9210862e7a8f1d523c886eb1022f1739cd946ccbf72cb996"} Dec 02 14:05:24 crc kubenswrapper[4900]: I1202 14:05:24.091099 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 14:05:24 crc kubenswrapper[4900]: I1202 14:05:24.091136 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 14:05:24 crc kubenswrapper[4900]: I1202 14:05:24.259262 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:24 crc kubenswrapper[4900]: I1202 14:05:24.259339 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:24 crc kubenswrapper[4900]: I1202 14:05:24.309430 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:24 crc kubenswrapper[4900]: I1202 14:05:24.335438 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:25 crc kubenswrapper[4900]: I1202 14:05:25.101548 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:25 crc 
kubenswrapper[4900]: I1202 14:05:25.101901 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:25 crc kubenswrapper[4900]: I1202 14:05:25.997444 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4900]: I1202 14:05:26.046507 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4900]: I1202 14:05:26.118590 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:05:26 crc kubenswrapper[4900]: I1202 14:05:26.122281 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c7618b-ae25-4801-a633-003bd8d3c32e","Type":"ContainerStarted","Data":"93760c20b1d3a639d1b551228b0a3e09e86b28dbad6eb889931bbe02bc729130"} Dec 02 14:05:26 crc kubenswrapper[4900]: I1202 14:05:26.152511 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.786265943 podStartE2EDuration="6.152483426s" podCreationTimestamp="2025-12-02 14:05:20 +0000 UTC" firstStartedPulling="2025-12-02 14:05:20.997064959 +0000 UTC m=+1366.412878810" lastFinishedPulling="2025-12-02 14:05:25.363282432 +0000 UTC m=+1370.779096293" observedRunningTime="2025-12-02 14:05:26.143569144 +0000 UTC m=+1371.559383005" watchObservedRunningTime="2025-12-02 14:05:26.152483426 +0000 UTC m=+1371.568297307" Dec 02 14:05:26 crc kubenswrapper[4900]: I1202 14:05:26.879481 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4900]: I1202 14:05:26.940912 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:30 crc kubenswrapper[4900]: I1202 14:05:30.552819 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.080619 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8jhpm"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.082755 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.084727 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.086341 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.113905 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8jhpm"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.164387 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-scripts\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.164464 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.164515 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-config-data\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.164846 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvrs6\" (UniqueName: \"kubernetes.io/projected/28dddd71-8010-481f-873c-b50f112e39ef-kube-api-access-fvrs6\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.270408 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-config-data\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.270888 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvrs6\" (UniqueName: \"kubernetes.io/projected/28dddd71-8010-481f-873c-b50f112e39ef-kube-api-access-fvrs6\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.271062 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-scripts\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.271125 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.280560 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.289474 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-scripts\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.290301 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-config-data\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.298547 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.303321 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.311584 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.319365 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvrs6\" (UniqueName: \"kubernetes.io/projected/28dddd71-8010-481f-873c-b50f112e39ef-kube-api-access-fvrs6\") pod \"nova-cell0-cell-mapping-8jhpm\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.337194 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.373147 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.373263 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4drw\" (UniqueName: \"kubernetes.io/projected/e99c8251-5fd6-453c-8714-a940f46b8655-kube-api-access-h4drw\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.373286 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e99c8251-5fd6-453c-8714-a940f46b8655-logs\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.373489 4900 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-config-data\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.383208 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.384729 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.392151 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.406714 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.412213 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.422673 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.432131 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.438111 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.446362 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.488703 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.488837 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4drw\" (UniqueName: \"kubernetes.io/projected/e99c8251-5fd6-453c-8714-a940f46b8655-kube-api-access-h4drw\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.488868 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e99c8251-5fd6-453c-8714-a940f46b8655-logs\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.488887 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-config-data\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.495075 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e99c8251-5fd6-453c-8714-a940f46b8655-logs\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.504579 
4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-config-data\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.511294 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.550214 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4drw\" (UniqueName: \"kubernetes.io/projected/e99c8251-5fd6-453c-8714-a940f46b8655-kube-api-access-h4drw\") pod \"nova-api-0\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.590602 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26pb\" (UniqueName: \"kubernetes.io/projected/7090d8ce-7baf-4f4f-bfd0-171b1680a843-kube-api-access-x26pb\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.590874 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.590912 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7090d8ce-7baf-4f4f-bfd0-171b1680a843-logs\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.590927 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.590998 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-config-data\") pod \"nova-scheduler-0\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.591041 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btdmb\" (UniqueName: \"kubernetes.io/projected/d6819624-09d4-47b4-86be-90d8584e2ce1-kube-api-access-btdmb\") pod \"nova-scheduler-0\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.591067 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-config-data\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.692529 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.692574 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7090d8ce-7baf-4f4f-bfd0-171b1680a843-logs\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.692591 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.692764 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-config-data\") pod \"nova-scheduler-0\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.692811 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btdmb\" (UniqueName: \"kubernetes.io/projected/d6819624-09d4-47b4-86be-90d8584e2ce1-kube-api-access-btdmb\") pod \"nova-scheduler-0\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.692839 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-config-data\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.692890 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x26pb\" (UniqueName: \"kubernetes.io/projected/7090d8ce-7baf-4f4f-bfd0-171b1680a843-kube-api-access-x26pb\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.705633 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.709814 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7090d8ce-7baf-4f4f-bfd0-171b1680a843-logs\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.714803 4900 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.715562 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.728305 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-config-data\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.729031 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.729222 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.729263 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-config-data\") pod \"nova-scheduler-0\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.743015 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.743105 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x26pb\" (UniqueName: \"kubernetes.io/projected/7090d8ce-7baf-4f4f-bfd0-171b1680a843-kube-api-access-x26pb\") pod \"nova-metadata-0\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") " pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.748393 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btdmb\" (UniqueName: \"kubernetes.io/projected/d6819624-09d4-47b4-86be-90d8584e2ce1-kube-api-access-btdmb\") pod \"nova-scheduler-0\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.758701 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-n8fgk"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.760284 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.809748 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.859687 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-n8fgk"] Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.897705 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44rqr\" (UniqueName: \"kubernetes.io/projected/1cc70359-25b1-45d9-a530-5204a265158e-kube-api-access-44rqr\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.897746 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kf6k\" (UniqueName: \"kubernetes.io/projected/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-kube-api-access-9kf6k\") pod \"nova-cell1-novncproxy-0\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.897847 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.897869 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.897917 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.897950 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.897987 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.898014 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-config\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: 
\"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.898089 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.925184 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.943778 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.999784 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:31 crc kubenswrapper[4900]: I1202 14:05:31.999858 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-config\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:31.999899 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:31.999982 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44rqr\" (UniqueName: \"kubernetes.io/projected/1cc70359-25b1-45d9-a530-5204a265158e-kube-api-access-44rqr\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.000005 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kf6k\" (UniqueName: \"kubernetes.io/projected/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-kube-api-access-9kf6k\") pod \"nova-cell1-novncproxy-0\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.000220 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.000264 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:32 crc 
kubenswrapper[4900]: I1202 14:05:32.000307 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.000358 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.000953 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-config\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.001342 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.001726 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.004229 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.004675 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.007259 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.013216 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.018210 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44rqr\" (UniqueName: 
\"kubernetes.io/projected/1cc70359-25b1-45d9-a530-5204a265158e-kube-api-access-44rqr\") pod \"dnsmasq-dns-bccf8f775-n8fgk\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.022198 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kf6k\" (UniqueName: \"kubernetes.io/projected/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-kube-api-access-9kf6k\") pod \"nova-cell1-novncproxy-0\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.068576 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.112606 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.127490 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8jhpm"] Dec 02 14:05:32 crc kubenswrapper[4900]: W1202 14:05:32.154998 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28dddd71_8010_481f_873c_b50f112e39ef.slice/crio-96ef52fab34cd37015e7af60fad71f20ee81d2422af1cf513bb1ec5b4d4c6ef1 WatchSource:0}: Error finding container 96ef52fab34cd37015e7af60fad71f20ee81d2422af1cf513bb1ec5b4d4c6ef1: Status 404 returned error can't find the container with id 96ef52fab34cd37015e7af60fad71f20ee81d2422af1cf513bb1ec5b4d4c6ef1 Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.217220 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8jhpm" event={"ID":"28dddd71-8010-481f-873c-b50f112e39ef","Type":"ContainerStarted","Data":"96ef52fab34cd37015e7af60fad71f20ee81d2422af1cf513bb1ec5b4d4c6ef1"} Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.298454 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z7hzq"] Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.299961 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.303794 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.303966 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.319660 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z7hzq"] Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.353803 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.362129 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.418964 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-scripts\") pod \"nova-cell1-conductor-db-sync-z7hzq\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.419052 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqznx\" (UniqueName: \"kubernetes.io/projected/62727648-546e-4e0e-9786-75f8bcd2e332-kube-api-access-nqznx\") pod \"nova-cell1-conductor-db-sync-z7hzq\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.419082 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-config-data\") pod \"nova-cell1-conductor-db-sync-z7hzq\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.419141 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z7hzq\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.500523 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:05:32 crc kubenswrapper[4900]: W1202 14:05:32.501221 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7090d8ce_7baf_4f4f_bfd0_171b1680a843.slice/crio-fb5583668550941fda56994024a5250f8657f000cf1f0e71a253c37dba7b405b WatchSource:0}: Error finding container fb5583668550941fda56994024a5250f8657f000cf1f0e71a253c37dba7b405b: Status 404 returned error can't find the container with id fb5583668550941fda56994024a5250f8657f000cf1f0e71a253c37dba7b405b Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.520930 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-scripts\") pod \"nova-cell1-conductor-db-sync-z7hzq\" 
(UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.521010 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqznx\" (UniqueName: \"kubernetes.io/projected/62727648-546e-4e0e-9786-75f8bcd2e332-kube-api-access-nqznx\") pod \"nova-cell1-conductor-db-sync-z7hzq\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.521043 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-config-data\") pod \"nova-cell1-conductor-db-sync-z7hzq\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.521097 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z7hzq\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.529284 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-scripts\") pod \"nova-cell1-conductor-db-sync-z7hzq\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.529359 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-z7hzq\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.529495 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-config-data\") pod \"nova-cell1-conductor-db-sync-z7hzq\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.544910 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqznx\" (UniqueName: \"kubernetes.io/projected/62727648-546e-4e0e-9786-75f8bcd2e332-kube-api-access-nqznx\") pod \"nova-cell1-conductor-db-sync-z7hzq\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.644599 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:05:32 crc kubenswrapper[4900]: W1202 14:05:32.646011 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6819624_09d4_47b4_86be_90d8584e2ce1.slice/crio-89d3026ca0169cdc01384cfad3d101736901f3e9eac2047fb43a8c9cc84af117 WatchSource:0}: Error finding container 89d3026ca0169cdc01384cfad3d101736901f3e9eac2047fb43a8c9cc84af117: Status 404 returned error can't find the container with id 
89d3026ca0169cdc01384cfad3d101736901f3e9eac2047fb43a8c9cc84af117 Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.720843 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.743918 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-n8fgk"] Dec 02 14:05:32 crc kubenswrapper[4900]: I1202 14:05:32.759091 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:05:32 crc kubenswrapper[4900]: W1202 14:05:32.771402 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cc70359_25b1_45d9_a530_5204a265158e.slice/crio-a03ebadfc03596fec6bd639ecbd19ef1cc65f4fef56fd1758a1ff370e1e3d2d3 WatchSource:0}: Error finding container a03ebadfc03596fec6bd639ecbd19ef1cc65f4fef56fd1758a1ff370e1e3d2d3: Status 404 returned error can't find the container with id a03ebadfc03596fec6bd639ecbd19ef1cc65f4fef56fd1758a1ff370e1e3d2d3 Dec 02 14:05:32 crc kubenswrapper[4900]: W1202 14:05:32.773906 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfce44d9e_d5f8_4625_b8f2_2c77ff046f7e.slice/crio-5cbd717e28ca94df63da2152f7a1d6f8d24f7c5d9441fd3a03891ce3d2f5d760 WatchSource:0}: Error finding container 5cbd717e28ca94df63da2152f7a1d6f8d24f7c5d9441fd3a03891ce3d2f5d760: Status 404 returned error can't find the container with id 5cbd717e28ca94df63da2152f7a1d6f8d24f7c5d9441fd3a03891ce3d2f5d760 Dec 02 14:05:33 crc kubenswrapper[4900]: I1202 14:05:33.220935 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z7hzq"] Dec 02 14:05:33 crc kubenswrapper[4900]: I1202 14:05:33.235844 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7090d8ce-7baf-4f4f-bfd0-171b1680a843","Type":"ContainerStarted","Data":"fb5583668550941fda56994024a5250f8657f000cf1f0e71a253c37dba7b405b"} Dec 02 14:05:33 crc kubenswrapper[4900]: I1202 14:05:33.237675 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e99c8251-5fd6-453c-8714-a940f46b8655","Type":"ContainerStarted","Data":"e22098215ce5b9b8c2e4dc996fcc65c0e3cc7ab200a14f5cd20eb8ea5ec0552b"} Dec 02 14:05:33 crc kubenswrapper[4900]: I1202 14:05:33.239340 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z7hzq" event={"ID":"62727648-546e-4e0e-9786-75f8bcd2e332","Type":"ContainerStarted","Data":"9b79f3add1babb6eb95c243b74f8e61b43f0a67519de098c16366af735fa7790"} Dec 02 14:05:33 crc kubenswrapper[4900]: I1202 14:05:33.241247 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8jhpm" event={"ID":"28dddd71-8010-481f-873c-b50f112e39ef","Type":"ContainerStarted","Data":"c62bcbaedf33f7593499ebbd156d4d7ef64b82ebebff13a7f3d1815cbb3551b2"} Dec 02 14:05:33 crc kubenswrapper[4900]: I1202 14:05:33.242493 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" event={"ID":"1cc70359-25b1-45d9-a530-5204a265158e","Type":"ContainerStarted","Data":"a03ebadfc03596fec6bd639ecbd19ef1cc65f4fef56fd1758a1ff370e1e3d2d3"} Dec 02 14:05:33 crc kubenswrapper[4900]: I1202 14:05:33.251838 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e","Type":"ContainerStarted","Data":"5cbd717e28ca94df63da2152f7a1d6f8d24f7c5d9441fd3a03891ce3d2f5d760"} Dec 02 14:05:33 crc kubenswrapper[4900]: I1202 14:05:33.252485 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6819624-09d4-47b4-86be-90d8584e2ce1","Type":"ContainerStarted","Data":"89d3026ca0169cdc01384cfad3d101736901f3e9eac2047fb43a8c9cc84af117"} Dec 02 14:05:35 crc kubenswrapper[4900]: I1202 14:05:35.285545 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z7hzq" event={"ID":"62727648-546e-4e0e-9786-75f8bcd2e332","Type":"ContainerStarted","Data":"47cd53e70fc8f37391f555876fc14fd173fb43c514c6588428b1c4d99716e7be"} Dec 02 14:05:35 crc kubenswrapper[4900]: I1202 14:05:35.288218 4900 generic.go:334] "Generic (PLEG): container finished" podID="1cc70359-25b1-45d9-a530-5204a265158e" containerID="553cafd7ab91415e31d3b2382fe144ec7b02000846966aa6b1300a7fd6d087dd" exitCode=0 Dec 02 14:05:35 crc kubenswrapper[4900]: I1202 14:05:35.288326 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" event={"ID":"1cc70359-25b1-45d9-a530-5204a265158e","Type":"ContainerDied","Data":"553cafd7ab91415e31d3b2382fe144ec7b02000846966aa6b1300a7fd6d087dd"} Dec 02 14:05:35 crc kubenswrapper[4900]: I1202 14:05:35.304711 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-z7hzq" podStartSLOduration=3.304697703 podStartE2EDuration="3.304697703s" podCreationTimestamp="2025-12-02 14:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:35.304191848 +0000 UTC m=+1380.720005719" watchObservedRunningTime="2025-12-02 14:05:35.304697703 +0000 UTC m=+1380.720511554" Dec 02 14:05:35 crc kubenswrapper[4900]: I1202 14:05:35.330029 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8jhpm" podStartSLOduration=4.330001546 podStartE2EDuration="4.330001546s" podCreationTimestamp="2025-12-02 14:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:35.324189962 +0000 UTC m=+1380.740003833" watchObservedRunningTime="2025-12-02 14:05:35.330001546 +0000 UTC m=+1380.745815397" Dec 02 14:05:35 crc kubenswrapper[4900]: I1202 14:05:35.737406 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:05:35 crc kubenswrapper[4900]: I1202 14:05:35.755705 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:05:36 crc kubenswrapper[4900]: I1202 14:05:36.306087 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" event={"ID":"1cc70359-25b1-45d9-a530-5204a265158e","Type":"ContainerStarted","Data":"238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115"} Dec 02 14:05:36 crc kubenswrapper[4900]: I1202 14:05:36.306140 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:36 crc kubenswrapper[4900]: I1202 14:05:36.329496 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" podStartSLOduration=5.329476486 podStartE2EDuration="5.329476486s" 
podCreationTimestamp="2025-12-02 14:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:36.32179592 +0000 UTC m=+1381.737609771" watchObservedRunningTime="2025-12-02 14:05:36.329476486 +0000 UTC m=+1381.745290337" Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.325137 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7090d8ce-7baf-4f4f-bfd0-171b1680a843","Type":"ContainerStarted","Data":"720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082"} Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.325693 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7090d8ce-7baf-4f4f-bfd0-171b1680a843","Type":"ContainerStarted","Data":"f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018"} Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.325579 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7090d8ce-7baf-4f4f-bfd0-171b1680a843" containerName="nova-metadata-metadata" containerID="cri-o://720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082" gracePeriod=30 Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.325315 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7090d8ce-7baf-4f4f-bfd0-171b1680a843" containerName="nova-metadata-log" containerID="cri-o://f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018" gracePeriod=30 Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.342919 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e99c8251-5fd6-453c-8714-a940f46b8655","Type":"ContainerStarted","Data":"c520b85a547bf13b1e83e66f7b5a8281322844a8b7d7de8587e8cf1aec9a5943"} Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.343207 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e99c8251-5fd6-453c-8714-a940f46b8655","Type":"ContainerStarted","Data":"fc5c9018b82428ef5e5f1304fdb9ff60e961aa187b7f46a2cef4f807d0b2ce75"} Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.346460 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e","Type":"ContainerStarted","Data":"990529831e051ff4684e490bb271b7ec920ff47682a830e299381f6386846e18"} Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.346593 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="fce44d9e-d5f8-4625-b8f2-2c77ff046f7e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://990529831e051ff4684e490bb271b7ec920ff47682a830e299381f6386846e18" gracePeriod=30 Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.354885 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6819624-09d4-47b4-86be-90d8584e2ce1","Type":"ContainerStarted","Data":"468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a"} Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.370881 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.32056536 podStartE2EDuration="7.370865114s" podCreationTimestamp="2025-12-02 14:05:31 +0000 UTC" firstStartedPulling="2025-12-02 14:05:32.505285201 
+0000 UTC m=+1377.921099052" lastFinishedPulling="2025-12-02 14:05:37.555584955 +0000 UTC m=+1382.971398806" observedRunningTime="2025-12-02 14:05:38.368280211 +0000 UTC m=+1383.784094102" watchObservedRunningTime="2025-12-02 14:05:38.370865114 +0000 UTC m=+1383.786678965" Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.397205 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.193687774 podStartE2EDuration="7.397185526s" podCreationTimestamp="2025-12-02 14:05:31 +0000 UTC" firstStartedPulling="2025-12-02 14:05:32.361955671 +0000 UTC m=+1377.777769522" lastFinishedPulling="2025-12-02 14:05:37.565453413 +0000 UTC m=+1382.981267274" observedRunningTime="2025-12-02 14:05:38.395040345 +0000 UTC m=+1383.810854196" watchObservedRunningTime="2025-12-02 14:05:38.397185526 +0000 UTC m=+1383.812999387" Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.423595 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.51751075 podStartE2EDuration="7.423562189s" podCreationTimestamp="2025-12-02 14:05:31 +0000 UTC" firstStartedPulling="2025-12-02 14:05:32.660737052 +0000 UTC m=+1378.076550903" lastFinishedPulling="2025-12-02 14:05:37.566788481 +0000 UTC m=+1382.982602342" observedRunningTime="2025-12-02 14:05:38.41790148 +0000 UTC m=+1383.833715331" watchObservedRunningTime="2025-12-02 14:05:38.423562189 +0000 UTC m=+1383.839376040" Dec 02 14:05:38 crc kubenswrapper[4900]: I1202 14:05:38.435230 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.647515405 podStartE2EDuration="7.435210048s" podCreationTimestamp="2025-12-02 14:05:31 +0000 UTC" firstStartedPulling="2025-12-02 14:05:32.7777425 +0000 UTC m=+1378.193556351" lastFinishedPulling="2025-12-02 14:05:37.565437143 +0000 UTC m=+1382.981250994" observedRunningTime="2025-12-02 14:05:38.431628037 +0000 UTC m=+1383.847441888" watchObservedRunningTime="2025-12-02 14:05:38.435210048 +0000 UTC m=+1383.851023899" Dec 02 14:05:39 crc kubenswrapper[4900]: I1202 14:05:39.379110 4900 generic.go:334] "Generic (PLEG): container finished" podID="7090d8ce-7baf-4f4f-bfd0-171b1680a843" containerID="f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018" exitCode=143 Dec 02 14:05:39 crc kubenswrapper[4900]: I1202 14:05:39.379230 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7090d8ce-7baf-4f4f-bfd0-171b1680a843","Type":"ContainerDied","Data":"f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018"} Dec 02 14:05:40 crc kubenswrapper[4900]: I1202 14:05:40.393544 4900 generic.go:334] "Generic (PLEG): container finished" podID="28dddd71-8010-481f-873c-b50f112e39ef" containerID="c62bcbaedf33f7593499ebbd156d4d7ef64b82ebebff13a7f3d1815cbb3551b2" exitCode=0 Dec 02 14:05:40 crc kubenswrapper[4900]: I1202 14:05:40.393687 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8jhpm" event={"ID":"28dddd71-8010-481f-873c-b50f112e39ef","Type":"ContainerDied","Data":"c62bcbaedf33f7593499ebbd156d4d7ef64b82ebebff13a7f3d1815cbb3551b2"} Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.730193 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.730636 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.822451 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.926315 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.926360 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.944486 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.944527 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.954014 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-scripts\") pod \"28dddd71-8010-481f-873c-b50f112e39ef\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.954084 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvrs6\" (UniqueName: \"kubernetes.io/projected/28dddd71-8010-481f-873c-b50f112e39ef-kube-api-access-fvrs6\") pod \"28dddd71-8010-481f-873c-b50f112e39ef\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.954312 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-combined-ca-bundle\") pod \"28dddd71-8010-481f-873c-b50f112e39ef\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.954358 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-config-data\") pod \"28dddd71-8010-481f-873c-b50f112e39ef\" (UID: \"28dddd71-8010-481f-873c-b50f112e39ef\") " Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.960343 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-scripts" (OuterVolumeSpecName: "scripts") pod "28dddd71-8010-481f-873c-b50f112e39ef" (UID: "28dddd71-8010-481f-873c-b50f112e39ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:41 crc kubenswrapper[4900]: I1202 14:05:41.994758 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28dddd71-8010-481f-873c-b50f112e39ef-kube-api-access-fvrs6" (OuterVolumeSpecName: "kube-api-access-fvrs6") pod "28dddd71-8010-481f-873c-b50f112e39ef" (UID: "28dddd71-8010-481f-873c-b50f112e39ef"). InnerVolumeSpecName "kube-api-access-fvrs6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.010489 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.032566 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28dddd71-8010-481f-873c-b50f112e39ef" (UID: "28dddd71-8010-481f-873c-b50f112e39ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.033801 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-config-data" (OuterVolumeSpecName: "config-data") pod "28dddd71-8010-481f-873c-b50f112e39ef" (UID: "28dddd71-8010-481f-873c-b50f112e39ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.056759 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.056803 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.056815 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28dddd71-8010-481f-873c-b50f112e39ef-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.056827 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvrs6\" (UniqueName: \"kubernetes.io/projected/28dddd71-8010-481f-873c-b50f112e39ef-kube-api-access-fvrs6\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.069895 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.113835 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.175740 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-bhwdt"] Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.175976 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" podUID="a352e2a1-219a-4b55-9d8f-d4607dd3890c" containerName="dnsmasq-dns" containerID="cri-o://86a3abf1966efd90643fdf0d03f758199c3718b6ab23ca089293aa663fb76b77" gracePeriod=10 Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.416920 4900 generic.go:334] "Generic (PLEG): container finished" podID="a352e2a1-219a-4b55-9d8f-d4607dd3890c" containerID="86a3abf1966efd90643fdf0d03f758199c3718b6ab23ca089293aa663fb76b77" exitCode=0 Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.416993 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" 
event={"ID":"a352e2a1-219a-4b55-9d8f-d4607dd3890c","Type":"ContainerDied","Data":"86a3abf1966efd90643fdf0d03f758199c3718b6ab23ca089293aa663fb76b77"} Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.424364 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8jhpm" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.424586 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8jhpm" event={"ID":"28dddd71-8010-481f-873c-b50f112e39ef","Type":"ContainerDied","Data":"96ef52fab34cd37015e7af60fad71f20ee81d2422af1cf513bb1ec5b4d4c6ef1"} Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.424637 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ef52fab34cd37015e7af60fad71f20ee81d2422af1cf513bb1ec5b4d4c6ef1" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.486347 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.609162 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.611968 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e99c8251-5fd6-453c-8714-a940f46b8655" containerName="nova-api-log" containerID="cri-o://fc5c9018b82428ef5e5f1304fdb9ff60e961aa187b7f46a2cef4f807d0b2ce75" gracePeriod=30 Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.612430 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e99c8251-5fd6-453c-8714-a940f46b8655" containerName="nova-api-api" containerID="cri-o://c520b85a547bf13b1e83e66f7b5a8281322844a8b7d7de8587e8cf1aec9a5943" gracePeriod=30 Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.622459 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e99c8251-5fd6-453c-8714-a940f46b8655" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": EOF" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.626071 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e99c8251-5fd6-453c-8714-a940f46b8655" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": EOF" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.818549 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.992290 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-swift-storage-0\") pod \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.992350 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-config\") pod \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.992391 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-nb\") pod \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.992962 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-sb\") pod \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.993007 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-svc\") pod \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " Dec 02 14:05:42 crc kubenswrapper[4900]: I1202 14:05:42.993048 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qqnl\" (UniqueName: \"kubernetes.io/projected/a352e2a1-219a-4b55-9d8f-d4607dd3890c-kube-api-access-8qqnl\") pod \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\" (UID: \"a352e2a1-219a-4b55-9d8f-d4607dd3890c\") " Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.004752 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a352e2a1-219a-4b55-9d8f-d4607dd3890c-kube-api-access-8qqnl" (OuterVolumeSpecName: "kube-api-access-8qqnl") pod "a352e2a1-219a-4b55-9d8f-d4607dd3890c" (UID: "a352e2a1-219a-4b55-9d8f-d4607dd3890c"). InnerVolumeSpecName "kube-api-access-8qqnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.051349 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a352e2a1-219a-4b55-9d8f-d4607dd3890c" (UID: "a352e2a1-219a-4b55-9d8f-d4607dd3890c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.051668 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-config" (OuterVolumeSpecName: "config") pod "a352e2a1-219a-4b55-9d8f-d4607dd3890c" (UID: "a352e2a1-219a-4b55-9d8f-d4607dd3890c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.067106 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a352e2a1-219a-4b55-9d8f-d4607dd3890c" (UID: "a352e2a1-219a-4b55-9d8f-d4607dd3890c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.070279 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a352e2a1-219a-4b55-9d8f-d4607dd3890c" (UID: "a352e2a1-219a-4b55-9d8f-d4607dd3890c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.093064 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a352e2a1-219a-4b55-9d8f-d4607dd3890c" (UID: "a352e2a1-219a-4b55-9d8f-d4607dd3890c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.094877 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.094897 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.094910 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.094918 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.094927 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qqnl\" (UniqueName: \"kubernetes.io/projected/a352e2a1-219a-4b55-9d8f-d4607dd3890c-kube-api-access-8qqnl\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.094939 4900 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a352e2a1-219a-4b55-9d8f-d4607dd3890c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.180033 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.433429 4900 generic.go:334] "Generic (PLEG): container finished" podID="e99c8251-5fd6-453c-8714-a940f46b8655" containerID="fc5c9018b82428ef5e5f1304fdb9ff60e961aa187b7f46a2cef4f807d0b2ce75" exitCode=143 Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.433548 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e99c8251-5fd6-453c-8714-a940f46b8655","Type":"ContainerDied","Data":"fc5c9018b82428ef5e5f1304fdb9ff60e961aa187b7f46a2cef4f807d0b2ce75"} Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.435638 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" event={"ID":"a352e2a1-219a-4b55-9d8f-d4607dd3890c","Type":"ContainerDied","Data":"1148ef1d96c8ce5cd47f283c998f78148fe6fcde2f5300054cd03a8143bd3245"} Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.435699 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-bhwdt" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.435709 4900 scope.go:117] "RemoveContainer" containerID="86a3abf1966efd90643fdf0d03f758199c3718b6ab23ca089293aa663fb76b77" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.471637 4900 scope.go:117] "RemoveContainer" containerID="3747c7dcd5e0e5831178f5667e047c7cb17f5cffb0c9a78b64fbfa803f7af3e0" Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.479751 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-bhwdt"] Dec 02 14:05:43 crc kubenswrapper[4900]: I1202 14:05:43.504387 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-bhwdt"] Dec 02 14:05:44 crc kubenswrapper[4900]: I1202 14:05:44.447315 4900 generic.go:334] "Generic (PLEG): container finished" podID="62727648-546e-4e0e-9786-75f8bcd2e332" containerID="47cd53e70fc8f37391f555876fc14fd173fb43c514c6588428b1c4d99716e7be" exitCode=0 Dec 02 14:05:44 crc kubenswrapper[4900]: I1202 14:05:44.447455 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z7hzq" event={"ID":"62727648-546e-4e0e-9786-75f8bcd2e332","Type":"ContainerDied","Data":"47cd53e70fc8f37391f555876fc14fd173fb43c514c6588428b1c4d99716e7be"} Dec 02 14:05:44 crc kubenswrapper[4900]: I1202 14:05:44.448985 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d6819624-09d4-47b4-86be-90d8584e2ce1" containerName="nova-scheduler-scheduler" containerID="cri-o://468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a" gracePeriod=30 Dec 02 14:05:44 crc kubenswrapper[4900]: I1202 14:05:44.929537 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a352e2a1-219a-4b55-9d8f-d4607dd3890c" path="/var/lib/kubelet/pods/a352e2a1-219a-4b55-9d8f-d4607dd3890c/volumes" Dec 02 14:05:45 crc kubenswrapper[4900]: I1202 14:05:45.911428 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.057406 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqznx\" (UniqueName: \"kubernetes.io/projected/62727648-546e-4e0e-9786-75f8bcd2e332-kube-api-access-nqznx\") pod \"62727648-546e-4e0e-9786-75f8bcd2e332\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.058010 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-scripts\") pod \"62727648-546e-4e0e-9786-75f8bcd2e332\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.058140 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-combined-ca-bundle\") pod \"62727648-546e-4e0e-9786-75f8bcd2e332\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.058216 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-config-data\") pod \"62727648-546e-4e0e-9786-75f8bcd2e332\" (UID: \"62727648-546e-4e0e-9786-75f8bcd2e332\") " Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.072122 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-scripts" (OuterVolumeSpecName: "scripts") pod "62727648-546e-4e0e-9786-75f8bcd2e332" (UID: "62727648-546e-4e0e-9786-75f8bcd2e332"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.072134 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62727648-546e-4e0e-9786-75f8bcd2e332-kube-api-access-nqznx" (OuterVolumeSpecName: "kube-api-access-nqznx") pod "62727648-546e-4e0e-9786-75f8bcd2e332" (UID: "62727648-546e-4e0e-9786-75f8bcd2e332"). InnerVolumeSpecName "kube-api-access-nqznx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.112779 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-config-data" (OuterVolumeSpecName: "config-data") pod "62727648-546e-4e0e-9786-75f8bcd2e332" (UID: "62727648-546e-4e0e-9786-75f8bcd2e332"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.117699 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62727648-546e-4e0e-9786-75f8bcd2e332" (UID: "62727648-546e-4e0e-9786-75f8bcd2e332"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.160871 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqznx\" (UniqueName: \"kubernetes.io/projected/62727648-546e-4e0e-9786-75f8bcd2e332-kube-api-access-nqznx\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.160916 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.160961 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.160976 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62727648-546e-4e0e-9786-75f8bcd2e332-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.482710 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-z7hzq" event={"ID":"62727648-546e-4e0e-9786-75f8bcd2e332","Type":"ContainerDied","Data":"9b79f3add1babb6eb95c243b74f8e61b43f0a67519de098c16366af735fa7790"} Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.482776 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b79f3add1babb6eb95c243b74f8e61b43f0a67519de098c16366af735fa7790" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.482816 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-z7hzq" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.600209 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 14:05:46 crc kubenswrapper[4900]: E1202 14:05:46.600707 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28dddd71-8010-481f-873c-b50f112e39ef" containerName="nova-manage" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.600725 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="28dddd71-8010-481f-873c-b50f112e39ef" containerName="nova-manage" Dec 02 14:05:46 crc kubenswrapper[4900]: E1202 14:05:46.600752 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a352e2a1-219a-4b55-9d8f-d4607dd3890c" containerName="init" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.600759 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a352e2a1-219a-4b55-9d8f-d4607dd3890c" containerName="init" Dec 02 14:05:46 crc kubenswrapper[4900]: E1202 14:05:46.600774 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a352e2a1-219a-4b55-9d8f-d4607dd3890c" containerName="dnsmasq-dns" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.600781 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a352e2a1-219a-4b55-9d8f-d4607dd3890c" containerName="dnsmasq-dns" Dec 02 14:05:46 crc kubenswrapper[4900]: E1202 14:05:46.600791 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62727648-546e-4e0e-9786-75f8bcd2e332" containerName="nova-cell1-conductor-db-sync" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.600797 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="62727648-546e-4e0e-9786-75f8bcd2e332" 
containerName="nova-cell1-conductor-db-sync" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.601002 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="62727648-546e-4e0e-9786-75f8bcd2e332" containerName="nova-cell1-conductor-db-sync" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.601019 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="28dddd71-8010-481f-873c-b50f112e39ef" containerName="nova-manage" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.601027 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a352e2a1-219a-4b55-9d8f-d4607dd3890c" containerName="dnsmasq-dns" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.601718 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.604722 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.620386 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.773441 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.773666 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.773861 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slkvt\" (UniqueName: \"kubernetes.io/projected/784ffd24-69a7-4235-9d4d-4a1be6f183fd-kube-api-access-slkvt\") pod \"nova-cell1-conductor-0\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.876726 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.876840 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.876936 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slkvt\" (UniqueName: \"kubernetes.io/projected/784ffd24-69a7-4235-9d4d-4a1be6f183fd-kube-api-access-slkvt\") pod \"nova-cell1-conductor-0\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:46 crc kubenswrapper[4900]: 
I1202 14:05:46.881903 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.883123 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.909314 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slkvt\" (UniqueName: \"kubernetes.io/projected/784ffd24-69a7-4235-9d4d-4a1be6f183fd-kube-api-access-slkvt\") pod \"nova-cell1-conductor-0\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:46 crc kubenswrapper[4900]: I1202 14:05:46.921762 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:46 crc kubenswrapper[4900]: E1202 14:05:46.947470 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:05:46 crc kubenswrapper[4900]: E1202 14:05:46.952362 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:05:46 crc kubenswrapper[4900]: E1202 14:05:46.955428 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:05:46 crc kubenswrapper[4900]: E1202 14:05:46.955555 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d6819624-09d4-47b4-86be-90d8584e2ce1" containerName="nova-scheduler-scheduler" Dec 02 14:05:47 crc kubenswrapper[4900]: I1202 14:05:47.456978 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 14:05:47 crc kubenswrapper[4900]: W1202 14:05:47.465394 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod784ffd24_69a7_4235_9d4d_4a1be6f183fd.slice/crio-093fa19a335b0f733595da6c62f5afca9b7b49c8443e9283e1ae31c2c4299a47 WatchSource:0}: Error finding container 093fa19a335b0f733595da6c62f5afca9b7b49c8443e9283e1ae31c2c4299a47: Status 404 returned error can't find the container with id 093fa19a335b0f733595da6c62f5afca9b7b49c8443e9283e1ae31c2c4299a47 Dec 02 14:05:47 crc kubenswrapper[4900]: 
I1202 14:05:47.495790 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"784ffd24-69a7-4235-9d4d-4a1be6f183fd","Type":"ContainerStarted","Data":"093fa19a335b0f733595da6c62f5afca9b7b49c8443e9283e1ae31c2c4299a47"} Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.516244 4900 generic.go:334] "Generic (PLEG): container finished" podID="e99c8251-5fd6-453c-8714-a940f46b8655" containerID="c520b85a547bf13b1e83e66f7b5a8281322844a8b7d7de8587e8cf1aec9a5943" exitCode=0 Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.516585 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e99c8251-5fd6-453c-8714-a940f46b8655","Type":"ContainerDied","Data":"c520b85a547bf13b1e83e66f7b5a8281322844a8b7d7de8587e8cf1aec9a5943"} Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.516764 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e99c8251-5fd6-453c-8714-a940f46b8655","Type":"ContainerDied","Data":"e22098215ce5b9b8c2e4dc996fcc65c0e3cc7ab200a14f5cd20eb8ea5ec0552b"} Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.516784 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22098215ce5b9b8c2e4dc996fcc65c0e3cc7ab200a14f5cd20eb8ea5ec0552b" Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.519302 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"784ffd24-69a7-4235-9d4d-4a1be6f183fd","Type":"ContainerStarted","Data":"510d10432ff195659ecc944eebf232f1acb2bf5b53e5bcf0ad3e9a2ab2d1a6fb"} Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.520379 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.570325 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.570306788 podStartE2EDuration="2.570306788s" podCreationTimestamp="2025-12-02 14:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:48.552860976 +0000 UTC m=+1393.968674847" watchObservedRunningTime="2025-12-02 14:05:48.570306788 +0000 UTC m=+1393.986120649" Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.572560 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.715611 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-config-data\") pod \"e99c8251-5fd6-453c-8714-a940f46b8655\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.715982 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e99c8251-5fd6-453c-8714-a940f46b8655-logs\") pod \"e99c8251-5fd6-453c-8714-a940f46b8655\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.716082 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4drw\" (UniqueName: \"kubernetes.io/projected/e99c8251-5fd6-453c-8714-a940f46b8655-kube-api-access-h4drw\") pod \"e99c8251-5fd6-453c-8714-a940f46b8655\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.716124 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-combined-ca-bundle\") pod \"e99c8251-5fd6-453c-8714-a940f46b8655\" (UID: \"e99c8251-5fd6-453c-8714-a940f46b8655\") " Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.716373 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e99c8251-5fd6-453c-8714-a940f46b8655-logs" (OuterVolumeSpecName: "logs") pod "e99c8251-5fd6-453c-8714-a940f46b8655" (UID: "e99c8251-5fd6-453c-8714-a940f46b8655"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.716939 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e99c8251-5fd6-453c-8714-a940f46b8655-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.723480 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99c8251-5fd6-453c-8714-a940f46b8655-kube-api-access-h4drw" (OuterVolumeSpecName: "kube-api-access-h4drw") pod "e99c8251-5fd6-453c-8714-a940f46b8655" (UID: "e99c8251-5fd6-453c-8714-a940f46b8655"). InnerVolumeSpecName "kube-api-access-h4drw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.752134 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-config-data" (OuterVolumeSpecName: "config-data") pod "e99c8251-5fd6-453c-8714-a940f46b8655" (UID: "e99c8251-5fd6-453c-8714-a940f46b8655"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.771777 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e99c8251-5fd6-453c-8714-a940f46b8655" (UID: "e99c8251-5fd6-453c-8714-a940f46b8655"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.818819 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.818875 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4drw\" (UniqueName: \"kubernetes.io/projected/e99c8251-5fd6-453c-8714-a940f46b8655-kube-api-access-h4drw\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:48 crc kubenswrapper[4900]: I1202 14:05:48.818900 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99c8251-5fd6-453c-8714-a940f46b8655-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.380845 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.538890 4900 generic.go:334] "Generic (PLEG): container finished" podID="d6819624-09d4-47b4-86be-90d8584e2ce1" containerID="468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a" exitCode=0 Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.538954 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.539030 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6819624-09d4-47b4-86be-90d8584e2ce1","Type":"ContainerDied","Data":"468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a"} Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.539100 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6819624-09d4-47b4-86be-90d8584e2ce1","Type":"ContainerDied","Data":"89d3026ca0169cdc01384cfad3d101736901f3e9eac2047fb43a8c9cc84af117"} Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.539134 4900 scope.go:117] "RemoveContainer" containerID="468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.539418 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.540688 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-config-data\") pod \"d6819624-09d4-47b4-86be-90d8584e2ce1\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.540901 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-combined-ca-bundle\") pod \"d6819624-09d4-47b4-86be-90d8584e2ce1\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.540941 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btdmb\" (UniqueName: \"kubernetes.io/projected/d6819624-09d4-47b4-86be-90d8584e2ce1-kube-api-access-btdmb\") pod \"d6819624-09d4-47b4-86be-90d8584e2ce1\" (UID: \"d6819624-09d4-47b4-86be-90d8584e2ce1\") " Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.570022 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6819624-09d4-47b4-86be-90d8584e2ce1-kube-api-access-btdmb" (OuterVolumeSpecName: "kube-api-access-btdmb") pod "d6819624-09d4-47b4-86be-90d8584e2ce1" (UID: "d6819624-09d4-47b4-86be-90d8584e2ce1"). InnerVolumeSpecName "kube-api-access-btdmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.593937 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.600706 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6819624-09d4-47b4-86be-90d8584e2ce1" (UID: "d6819624-09d4-47b4-86be-90d8584e2ce1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.603809 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-config-data" (OuterVolumeSpecName: "config-data") pod "d6819624-09d4-47b4-86be-90d8584e2ce1" (UID: "d6819624-09d4-47b4-86be-90d8584e2ce1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.614632 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.629848 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 14:05:49 crc kubenswrapper[4900]: E1202 14:05:49.630298 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e99c8251-5fd6-453c-8714-a940f46b8655" containerName="nova-api-api" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.630316 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e99c8251-5fd6-453c-8714-a940f46b8655" containerName="nova-api-api" Dec 02 14:05:49 crc kubenswrapper[4900]: E1202 14:05:49.630329 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e99c8251-5fd6-453c-8714-a940f46b8655" containerName="nova-api-log" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.630336 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e99c8251-5fd6-453c-8714-a940f46b8655" containerName="nova-api-log" Dec 02 14:05:49 crc kubenswrapper[4900]: E1202 14:05:49.630374 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6819624-09d4-47b4-86be-90d8584e2ce1" containerName="nova-scheduler-scheduler" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.630380 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6819624-09d4-47b4-86be-90d8584e2ce1" containerName="nova-scheduler-scheduler" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.630535 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e99c8251-5fd6-453c-8714-a940f46b8655" containerName="nova-api-log" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.630565 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e99c8251-5fd6-453c-8714-a940f46b8655" containerName="nova-api-api" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.630578 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6819624-09d4-47b4-86be-90d8584e2ce1" containerName="nova-scheduler-scheduler" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.633836 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.636666 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.639785 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.643713 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.643741 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6819624-09d4-47b4-86be-90d8584e2ce1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.643750 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btdmb\" (UniqueName: \"kubernetes.io/projected/d6819624-09d4-47b4-86be-90d8584e2ce1-kube-api-access-btdmb\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.659466 4900 scope.go:117] "RemoveContainer" containerID="468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a" Dec 02 14:05:49 crc kubenswrapper[4900]: E1202 14:05:49.662223 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a\": container with ID starting with 468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a not found: ID does not exist" containerID="468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.662264 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a"} err="failed to get container status \"468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a\": rpc error: code = NotFound desc = could not find container \"468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a\": container with ID starting with 468c6090fd1b549a6b876788ae954bae1566e5db294821b59fe8f817c278032a not found: ID does not exist" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.746428 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l54n\" (UniqueName: \"kubernetes.io/projected/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-kube-api-access-2l54n\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.746790 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.746976 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-logs\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc 
kubenswrapper[4900]: I1202 14:05:49.747025 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-config-data\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.849235 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.849619 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-logs\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.849679 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-config-data\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.849808 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l54n\" (UniqueName: \"kubernetes.io/projected/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-kube-api-access-2l54n\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.850458 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-logs\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.856104 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.856405 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-config-data\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.878549 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l54n\" (UniqueName: \"kubernetes.io/projected/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-kube-api-access-2l54n\") pod \"nova-api-0\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.903381 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.927097 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.939563 4900 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.941142 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.945245 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.969965 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:05:49 crc kubenswrapper[4900]: I1202 14:05:49.973053 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.054812 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-config-data\") pod \"nova-scheduler-0\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.055071 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.055100 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsg6l\" (UniqueName: \"kubernetes.io/projected/16489417-050d-44b6-9a14-c2a7db666022-kube-api-access-qsg6l\") pod \"nova-scheduler-0\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.156513 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.156562 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsg6l\" (UniqueName: \"kubernetes.io/projected/16489417-050d-44b6-9a14-c2a7db666022-kube-api-access-qsg6l\") pod \"nova-scheduler-0\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.156610 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-config-data\") pod \"nova-scheduler-0\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.161270 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.165912 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-config-data\") pod \"nova-scheduler-0\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.182971 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsg6l\" (UniqueName: \"kubernetes.io/projected/16489417-050d-44b6-9a14-c2a7db666022-kube-api-access-qsg6l\") pod \"nova-scheduler-0\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " pod="openstack/nova-scheduler-0" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.363583 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.410258 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:05:50 crc kubenswrapper[4900]: W1202 14:05:50.414103 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c5c9338_4abd_4a13_ad8e_af4dfda3309e.slice/crio-39a698985bea34e0aefcca4bf9bdff44dd5ad0d75e10654e0d09c159017ad094 WatchSource:0}: Error finding container 39a698985bea34e0aefcca4bf9bdff44dd5ad0d75e10654e0d09c159017ad094: Status 404 returned error can't find the container with id 39a698985bea34e0aefcca4bf9bdff44dd5ad0d75e10654e0d09c159017ad094 Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.492803 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.565094 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c5c9338-4abd-4a13-ad8e-af4dfda3309e","Type":"ContainerStarted","Data":"39a698985bea34e0aefcca4bf9bdff44dd5ad0d75e10654e0d09c159017ad094"} Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.889390 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:05:50 crc kubenswrapper[4900]: W1202 14:05:50.890250 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16489417_050d_44b6_9a14_c2a7db666022.slice/crio-a3d4b38cab61efc11c8ae718ec4bff3ef7f0f4c8d73821666ac12b46ec450a37 WatchSource:0}: Error finding container a3d4b38cab61efc11c8ae718ec4bff3ef7f0f4c8d73821666ac12b46ec450a37: Status 404 returned error can't find the container with id a3d4b38cab61efc11c8ae718ec4bff3ef7f0f4c8d73821666ac12b46ec450a37 Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.924608 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6819624-09d4-47b4-86be-90d8584e2ce1" path="/var/lib/kubelet/pods/d6819624-09d4-47b4-86be-90d8584e2ce1/volumes" Dec 02 14:05:50 crc kubenswrapper[4900]: I1202 14:05:50.926061 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e99c8251-5fd6-453c-8714-a940f46b8655" path="/var/lib/kubelet/pods/e99c8251-5fd6-453c-8714-a940f46b8655/volumes" Dec 02 14:05:51 crc kubenswrapper[4900]: I1202 14:05:51.580746 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16489417-050d-44b6-9a14-c2a7db666022","Type":"ContainerStarted","Data":"02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7"} Dec 02 14:05:51 crc kubenswrapper[4900]: I1202 14:05:51.581116 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"16489417-050d-44b6-9a14-c2a7db666022","Type":"ContainerStarted","Data":"a3d4b38cab61efc11c8ae718ec4bff3ef7f0f4c8d73821666ac12b46ec450a37"} Dec 02 14:05:51 crc kubenswrapper[4900]: I1202 14:05:51.585029 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c5c9338-4abd-4a13-ad8e-af4dfda3309e","Type":"ContainerStarted","Data":"384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562"} Dec 02 14:05:51 crc kubenswrapper[4900]: I1202 14:05:51.585085 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c5c9338-4abd-4a13-ad8e-af4dfda3309e","Type":"ContainerStarted","Data":"745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d"} Dec 02 14:05:51 crc kubenswrapper[4900]: I1202 14:05:51.623378 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.623348908 podStartE2EDuration="2.623348908s" podCreationTimestamp="2025-12-02 14:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:51.603824438 +0000 UTC m=+1397.019638289" watchObservedRunningTime="2025-12-02 14:05:51.623348908 +0000 UTC m=+1397.039162789" Dec 02 14:05:51 crc kubenswrapper[4900]: I1202 14:05:51.644701 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.644634008 podStartE2EDuration="2.644634008s" podCreationTimestamp="2025-12-02 14:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:51.63726816 +0000 UTC m=+1397.053082021" watchObservedRunningTime="2025-12-02 14:05:51.644634008 +0000 UTC m=+1397.060447899" Dec 02 14:05:54 crc kubenswrapper[4900]: I1202 14:05:54.399262 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:05:54 crc kubenswrapper[4900]: I1202 14:05:54.399968 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7d174e45-558a-4540-8ff2-65fbfb554be5" containerName="kube-state-metrics" containerID="cri-o://ec12c1067c3d80232f6837ff0d9be22301ba085b3f5a155a813302dd4bc96097" gracePeriod=30 Dec 02 14:05:54 crc kubenswrapper[4900]: I1202 14:05:54.622235 4900 generic.go:334] "Generic (PLEG): container finished" podID="7d174e45-558a-4540-8ff2-65fbfb554be5" containerID="ec12c1067c3d80232f6837ff0d9be22301ba085b3f5a155a813302dd4bc96097" exitCode=2 Dec 02 14:05:54 crc kubenswrapper[4900]: I1202 14:05:54.622297 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7d174e45-558a-4540-8ff2-65fbfb554be5","Type":"ContainerDied","Data":"ec12c1067c3d80232f6837ff0d9be22301ba085b3f5a155a813302dd4bc96097"} Dec 02 14:05:54 crc kubenswrapper[4900]: I1202 14:05:54.916011 4900 util.go:48] "No ready sandbox for pod can be found. 
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.066726 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdjnc\" (UniqueName: \"kubernetes.io/projected/7d174e45-558a-4540-8ff2-65fbfb554be5-kube-api-access-xdjnc\") pod \"7d174e45-558a-4540-8ff2-65fbfb554be5\" (UID: \"7d174e45-558a-4540-8ff2-65fbfb554be5\") "
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.090325 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d174e45-558a-4540-8ff2-65fbfb554be5-kube-api-access-xdjnc" (OuterVolumeSpecName: "kube-api-access-xdjnc") pod "7d174e45-558a-4540-8ff2-65fbfb554be5" (UID: "7d174e45-558a-4540-8ff2-65fbfb554be5"). InnerVolumeSpecName "kube-api-access-xdjnc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.169081 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdjnc\" (UniqueName: \"kubernetes.io/projected/7d174e45-558a-4540-8ff2-65fbfb554be5-kube-api-access-xdjnc\") on node \"crc\" DevicePath \"\""
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.363869 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.648803 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7d174e45-558a-4540-8ff2-65fbfb554be5","Type":"ContainerDied","Data":"91fa9f38dec184672c5784a4f892d333267923543430141305b192c5d4bfd6f3"}
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.648860 4900 scope.go:117] "RemoveContainer" containerID="ec12c1067c3d80232f6837ff0d9be22301ba085b3f5a155a813302dd4bc96097"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.648889 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.684824 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.693450 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.703145 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 02 14:05:55 crc kubenswrapper[4900]: E1202 14:05:55.703534 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d174e45-558a-4540-8ff2-65fbfb554be5" containerName="kube-state-metrics"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.703551 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d174e45-558a-4540-8ff2-65fbfb554be5" containerName="kube-state-metrics"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.703743 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d174e45-558a-4540-8ff2-65fbfb554be5" containerName="kube-state-metrics"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.704335 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.707038 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.707124 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.757709 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.784735 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.784833 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qdpt\" (UniqueName: \"kubernetes.io/projected/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-api-access-7qdpt\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.784894 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.784917 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.887226 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.887324 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qdpt\" (UniqueName: \"kubernetes.io/projected/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-api-access-7qdpt\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.887463 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.887492 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.890976 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.891821 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.904374 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:55 crc kubenswrapper[4900]: I1202 14:05:55.904712 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qdpt\" (UniqueName: \"kubernetes.io/projected/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-api-access-7qdpt\") pod \"kube-state-metrics-0\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " pod="openstack/kube-state-metrics-0"
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.027216 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.287978 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.288252 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="ceilometer-central-agent" containerID="cri-o://a78d9229515bde3b62004caeaa0c53e8b6cfd9765abcd5c53f628ac1b354bf0b" gracePeriod=30
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.288330 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="sg-core" containerID="cri-o://198cd9db43c9830f9210862e7a8f1d523c886eb1022f1739cd946ccbf72cb996" gracePeriod=30
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.288380 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="ceilometer-notification-agent" containerID="cri-o://02cf5f9ac713e3835b38492821cd3b1fb09242d8b48c150fec8ab3435bad2043" gracePeriod=30
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.288393 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="proxy-httpd" containerID="cri-o://93760c20b1d3a639d1b551228b0a3e09e86b28dbad6eb889931bbe02bc729130" gracePeriod=30
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.447716 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 02 14:05:56 crc kubenswrapper[4900]: W1202 14:05:56.455087 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97e183cf_c0fe_4f94_9c03_7f8fa792c4af.slice/crio-1a4f688d8b636455551170c1dabe6e8bc3e3dee2b38166a2df8d034be44e6bbf WatchSource:0}: Error finding container 1a4f688d8b636455551170c1dabe6e8bc3e3dee2b38166a2df8d034be44e6bbf: Status 404 returned error can't find the container with id 1a4f688d8b636455551170c1dabe6e8bc3e3dee2b38166a2df8d034be44e6bbf
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.661036 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97e183cf-c0fe-4f94-9c03-7f8fa792c4af","Type":"ContainerStarted","Data":"1a4f688d8b636455551170c1dabe6e8bc3e3dee2b38166a2df8d034be44e6bbf"}
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.665714 4900 generic.go:334] "Generic (PLEG): container finished" podID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerID="93760c20b1d3a639d1b551228b0a3e09e86b28dbad6eb889931bbe02bc729130" exitCode=0
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.665740 4900 generic.go:334] "Generic (PLEG): container finished" podID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerID="198cd9db43c9830f9210862e7a8f1d523c886eb1022f1739cd946ccbf72cb996" exitCode=2
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.665746 4900 generic.go:334] "Generic (PLEG): container finished" podID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerID="a78d9229515bde3b62004caeaa0c53e8b6cfd9765abcd5c53f628ac1b354bf0b" exitCode=0
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.665775 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c7618b-ae25-4801-a633-003bd8d3c32e","Type":"ContainerDied","Data":"93760c20b1d3a639d1b551228b0a3e09e86b28dbad6eb889931bbe02bc729130"}
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.665796 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c7618b-ae25-4801-a633-003bd8d3c32e","Type":"ContainerDied","Data":"198cd9db43c9830f9210862e7a8f1d523c886eb1022f1739cd946ccbf72cb996"}
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.665806 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c7618b-ae25-4801-a633-003bd8d3c32e","Type":"ContainerDied","Data":"a78d9229515bde3b62004caeaa0c53e8b6cfd9765abcd5c53f628ac1b354bf0b"}
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.934224 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d174e45-558a-4540-8ff2-65fbfb554be5" path="/var/lib/kubelet/pods/7d174e45-558a-4540-8ff2-65fbfb554be5/volumes"
Dec 02 14:05:56 crc kubenswrapper[4900]: I1202 14:05:56.956583 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Dec 02 14:05:57 crc kubenswrapper[4900]: I1202 14:05:57.677329 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97e183cf-c0fe-4f94-9c03-7f8fa792c4af","Type":"ContainerStarted","Data":"2cfb4aee9c0e60e8a52f829f41a8643aa512a3a323f16d80fe672fa39d031a24"}
Dec 02 14:05:57 crc kubenswrapper[4900]: I1202 14:05:57.677721 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 02 14:05:57 crc kubenswrapper[4900]: I1202 14:05:57.706814 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.328784568 podStartE2EDuration="2.706790051s" podCreationTimestamp="2025-12-02 14:05:55 +0000 UTC" firstStartedPulling="2025-12-02 14:05:56.458248811 +0000 UTC m=+1401.874062662" lastFinishedPulling="2025-12-02 14:05:56.836254284 +0000 UTC m=+1402.252068145" observedRunningTime="2025-12-02 14:05:57.694495974 +0000 UTC m=+1403.110309825" watchObservedRunningTime="2025-12-02 14:05:57.706790051 +0000 UTC m=+1403.122603912"
Dec 02 14:05:58 crc kubenswrapper[4900]: I1202 14:05:58.700437 4900 generic.go:334] "Generic (PLEG): container finished" podID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerID="02cf5f9ac713e3835b38492821cd3b1fb09242d8b48c150fec8ab3435bad2043" exitCode=0
Dec 02 14:05:58 crc kubenswrapper[4900]: I1202 14:05:58.700479 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c7618b-ae25-4801-a633-003bd8d3c32e","Type":"ContainerDied","Data":"02cf5f9ac713e3835b38492821cd3b1fb09242d8b48c150fec8ab3435bad2043"}
Dec 02 14:05:58 crc kubenswrapper[4900]: I1202 14:05:58.970850 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p4wkw"]
Dec 02 14:05:58 crc kubenswrapper[4900]: I1202 14:05:58.982020 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4wkw"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.015576 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4wkw"]
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.064136 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-utilities\") pod \"redhat-operators-p4wkw\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " pod="openshift-marketplace/redhat-operators-p4wkw"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.064871 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-catalog-content\") pod \"redhat-operators-p4wkw\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " pod="openshift-marketplace/redhat-operators-p4wkw"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.064933 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2g64\" (UniqueName: \"kubernetes.io/projected/0e09a87f-766e-41ef-950d-4603c8b052d1-kube-api-access-r2g64\") pod \"redhat-operators-p4wkw\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " pod="openshift-marketplace/redhat-operators-p4wkw"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.166950 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-catalog-content\") pod \"redhat-operators-p4wkw\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " pod="openshift-marketplace/redhat-operators-p4wkw"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.167016 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2g64\" (UniqueName: \"kubernetes.io/projected/0e09a87f-766e-41ef-950d-4603c8b052d1-kube-api-access-r2g64\") pod \"redhat-operators-p4wkw\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " pod="openshift-marketplace/redhat-operators-p4wkw"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.167066 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-utilities\") pod \"redhat-operators-p4wkw\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " pod="openshift-marketplace/redhat-operators-p4wkw"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.167682 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-utilities\") pod \"redhat-operators-p4wkw\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " pod="openshift-marketplace/redhat-operators-p4wkw"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.167942 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-catalog-content\") pod \"redhat-operators-p4wkw\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " pod="openshift-marketplace/redhat-operators-p4wkw"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.193420 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2g64\" (UniqueName: \"kubernetes.io/projected/0e09a87f-766e-41ef-950d-4603c8b052d1-kube-api-access-r2g64\") pod \"redhat-operators-p4wkw\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " pod="openshift-marketplace/redhat-operators-p4wkw"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.307149 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4wkw"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.375284 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.472456 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-sg-core-conf-yaml\") pod \"56c7618b-ae25-4801-a633-003bd8d3c32e\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") "
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.472738 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-run-httpd\") pod \"56c7618b-ae25-4801-a633-003bd8d3c32e\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") "
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.472791 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-scripts\") pod \"56c7618b-ae25-4801-a633-003bd8d3c32e\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") "
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.472826 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-config-data\") pod \"56c7618b-ae25-4801-a633-003bd8d3c32e\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") "
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.472850 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-log-httpd\") pod \"56c7618b-ae25-4801-a633-003bd8d3c32e\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") "
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.472889 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kpcj\" (UniqueName: \"kubernetes.io/projected/56c7618b-ae25-4801-a633-003bd8d3c32e-kube-api-access-7kpcj\") pod \"56c7618b-ae25-4801-a633-003bd8d3c32e\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") "
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.473043 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-combined-ca-bundle\") pod \"56c7618b-ae25-4801-a633-003bd8d3c32e\" (UID: \"56c7618b-ae25-4801-a633-003bd8d3c32e\") "
Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.475556 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "56c7618b-ae25-4801-a633-003bd8d3c32e" (UID: "56c7618b-ae25-4801-a633-003bd8d3c32e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.475861 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "56c7618b-ae25-4801-a633-003bd8d3c32e" (UID: "56c7618b-ae25-4801-a633-003bd8d3c32e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.482563 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c7618b-ae25-4801-a633-003bd8d3c32e-kube-api-access-7kpcj" (OuterVolumeSpecName: "kube-api-access-7kpcj") pod "56c7618b-ae25-4801-a633-003bd8d3c32e" (UID: "56c7618b-ae25-4801-a633-003bd8d3c32e"). InnerVolumeSpecName "kube-api-access-7kpcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.503517 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-scripts" (OuterVolumeSpecName: "scripts") pod "56c7618b-ae25-4801-a633-003bd8d3c32e" (UID: "56c7618b-ae25-4801-a633-003bd8d3c32e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.521186 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "56c7618b-ae25-4801-a633-003bd8d3c32e" (UID: "56c7618b-ae25-4801-a633-003bd8d3c32e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.575766 4900 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.575799 4900 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.575807 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.575816 4900 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56c7618b-ae25-4801-a633-003bd8d3c32e-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.575824 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kpcj\" (UniqueName: \"kubernetes.io/projected/56c7618b-ae25-4801-a633-003bd8d3c32e-kube-api-access-7kpcj\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.581913 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56c7618b-ae25-4801-a633-003bd8d3c32e" (UID: "56c7618b-ae25-4801-a633-003bd8d3c32e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.591375 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-config-data" (OuterVolumeSpecName: "config-data") pod "56c7618b-ae25-4801-a633-003bd8d3c32e" (UID: "56c7618b-ae25-4801-a633-003bd8d3c32e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.677395 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.677431 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c7618b-ae25-4801-a633-003bd8d3c32e-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.710320 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56c7618b-ae25-4801-a633-003bd8d3c32e","Type":"ContainerDied","Data":"3052a039de38b05ede57b85a5e63b44715d992e19c12d26d0b491b0b5d376e45"} Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.710386 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.710591 4900 scope.go:117] "RemoveContainer" containerID="93760c20b1d3a639d1b551228b0a3e09e86b28dbad6eb889931bbe02bc729130" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.734162 4900 scope.go:117] "RemoveContainer" containerID="198cd9db43c9830f9210862e7a8f1d523c886eb1022f1739cd946ccbf72cb996" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.743409 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.753409 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.759671 4900 scope.go:117] "RemoveContainer" containerID="02cf5f9ac713e3835b38492821cd3b1fb09242d8b48c150fec8ab3435bad2043" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.774564 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:59 crc kubenswrapper[4900]: E1202 14:05:59.775000 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="sg-core" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.775016 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="sg-core" Dec 02 14:05:59 crc kubenswrapper[4900]: E1202 14:05:59.775030 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="proxy-httpd" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.775037 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="proxy-httpd" Dec 02 14:05:59 crc kubenswrapper[4900]: E1202 14:05:59.775060 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="ceilometer-central-agent" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.775066 4900 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="ceilometer-central-agent" Dec 02 14:05:59 crc kubenswrapper[4900]: E1202 14:05:59.775093 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="ceilometer-notification-agent" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.775099 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="ceilometer-notification-agent" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.775271 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="sg-core" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.775294 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="proxy-httpd" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.775307 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="ceilometer-notification-agent" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.775317 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" containerName="ceilometer-central-agent" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.776928 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.782692 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.782881 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.783023 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.786704 4900 scope.go:117] "RemoveContainer" containerID="a78d9229515bde3b62004caeaa0c53e8b6cfd9765abcd5c53f628ac1b354bf0b" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.798182 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4wkw"] Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.810366 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.880067 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-scripts\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.880116 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-run-httpd\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.880159 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.880287 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85mk8\" (UniqueName: \"kubernetes.io/projected/0ff8756e-1e52-4215-9a6e-24fccd04935c-kube-api-access-85mk8\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.880312 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.880331 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-config-data\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.880356 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-log-httpd\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.880397 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.970608 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.970667 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.981728 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.981773 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-config-data\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.981806 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-log-httpd\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.981898 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.981959 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-scripts\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.981976 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-run-httpd\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.982007 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.982155 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85mk8\" (UniqueName: \"kubernetes.io/projected/0ff8756e-1e52-4215-9a6e-24fccd04935c-kube-api-access-85mk8\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.984285 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-run-httpd\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.989889 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.990005 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-scripts\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.990133 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.990283 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-config-data\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.992523 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-log-httpd\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:05:59 crc kubenswrapper[4900]: I1202 14:05:59.994950 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:06:00 crc kubenswrapper[4900]: I1202 14:06:00.005248 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85mk8\" (UniqueName: \"kubernetes.io/projected/0ff8756e-1e52-4215-9a6e-24fccd04935c-kube-api-access-85mk8\") pod \"ceilometer-0\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " pod="openstack/ceilometer-0" Dec 02 14:06:00 crc kubenswrapper[4900]: I1202 14:06:00.113636 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:06:00 crc kubenswrapper[4900]: I1202 14:06:00.364460 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 14:06:00 crc kubenswrapper[4900]: I1202 14:06:00.425457 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 14:06:00 crc kubenswrapper[4900]: I1202 14:06:00.704907 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:00 crc kubenswrapper[4900]: W1202 14:06:00.711951 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ff8756e_1e52_4215_9a6e_24fccd04935c.slice/crio-54bca2e45ce7654833a838822ab88f4f7d6cf2f4ac81f4e8e558ad32b08d5b35 WatchSource:0}: Error finding container 54bca2e45ce7654833a838822ab88f4f7d6cf2f4ac81f4e8e558ad32b08d5b35: Status 404 returned error can't find the container with id 54bca2e45ce7654833a838822ab88f4f7d6cf2f4ac81f4e8e558ad32b08d5b35 Dec 02 14:06:00 crc kubenswrapper[4900]: I1202 14:06:00.724477 4900 generic.go:334] "Generic (PLEG): container finished" podID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerID="9547d32b5ade522a00c12365cd4456c764aa3b576ac7ec7f6fe47627f8fb7337" exitCode=0 Dec 02 14:06:00 crc kubenswrapper[4900]: I1202 14:06:00.724542 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4wkw" event={"ID":"0e09a87f-766e-41ef-950d-4603c8b052d1","Type":"ContainerDied","Data":"9547d32b5ade522a00c12365cd4456c764aa3b576ac7ec7f6fe47627f8fb7337"} Dec 02 14:06:00 crc kubenswrapper[4900]: I1202 14:06:00.724583 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4wkw" event={"ID":"0e09a87f-766e-41ef-950d-4603c8b052d1","Type":"ContainerStarted","Data":"c4ae532a148827d6dda7214afd2f37834e87fab28389f834bf5cce18d186f3b1"} Dec 02 14:06:00 crc kubenswrapper[4900]: I1202 14:06:00.772143 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 14:06:00 crc kubenswrapper[4900]: I1202 14:06:00.931166 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c7618b-ae25-4801-a633-003bd8d3c32e" path="/var/lib/kubelet/pods/56c7618b-ae25-4801-a633-003bd8d3c32e/volumes" Dec 02 14:06:01 crc kubenswrapper[4900]: I1202 14:06:01.052878 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:01 crc kubenswrapper[4900]: I1202 14:06:01.052926 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:01 crc kubenswrapper[4900]: I1202 14:06:01.740218 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ff8756e-1e52-4215-9a6e-24fccd04935c","Type":"ContainerStarted","Data":"54bca2e45ce7654833a838822ab88f4f7d6cf2f4ac81f4e8e558ad32b08d5b35"} Dec 02 14:06:02 crc kubenswrapper[4900]: I1202 14:06:02.756341 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4wkw" event={"ID":"0e09a87f-766e-41ef-950d-4603c8b052d1","Type":"ContainerStarted","Data":"fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283"} Dec 02 14:06:02 crc kubenswrapper[4900]: I1202 14:06:02.760567 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ff8756e-1e52-4215-9a6e-24fccd04935c","Type":"ContainerStarted","Data":"850af502a0be0c61e8403d3045cb5b3a3b0f50a9394a211d2dc4011a8b1794bc"} Dec 02 14:06:03 crc kubenswrapper[4900]: I1202 14:06:03.769459 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ff8756e-1e52-4215-9a6e-24fccd04935c","Type":"ContainerStarted","Data":"7fb2168789e41c8a344c40cd50e3df9fc197e9dbb08a2f7815dd6053df985df9"} Dec 02 14:06:03 crc kubenswrapper[4900]: I1202 14:06:03.773422 4900 generic.go:334] "Generic (PLEG): container finished" podID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerID="fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283" exitCode=0 Dec 02 14:06:03 crc kubenswrapper[4900]: I1202 14:06:03.773457 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4wkw" event={"ID":"0e09a87f-766e-41ef-950d-4603c8b052d1","Type":"ContainerDied","Data":"fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283"} Dec 02 14:06:06 crc kubenswrapper[4900]: I1202 14:06:06.703065 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.812092 4900 util.go:48] "No ready sandbox for pod can be found. 
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.870136 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ff8756e-1e52-4215-9a6e-24fccd04935c","Type":"ContainerStarted","Data":"1d002374f102bf03a27f49433774508d65b670d849ffbac8069ea153b6387e6c"}
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.873354 4900 generic.go:334] "Generic (PLEG): container finished" podID="fce44d9e-d5f8-4625-b8f2-2c77ff046f7e" containerID="990529831e051ff4684e490bb271b7ec920ff47682a830e299381f6386846e18" exitCode=137
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.873395 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e","Type":"ContainerDied","Data":"990529831e051ff4684e490bb271b7ec920ff47682a830e299381f6386846e18"}
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.875337 4900 generic.go:334] "Generic (PLEG): container finished" podID="7090d8ce-7baf-4f4f-bfd0-171b1680a843" containerID="720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082" exitCode=137
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.875383 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7090d8ce-7baf-4f4f-bfd0-171b1680a843","Type":"ContainerDied","Data":"720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082"}
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.875398 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7090d8ce-7baf-4f4f-bfd0-171b1680a843","Type":"ContainerDied","Data":"fb5583668550941fda56994024a5250f8657f000cf1f0e71a253c37dba7b405b"}
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.875414 4900 scope.go:117] "RemoveContainer" containerID="720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082"
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.875531 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.880180 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4wkw" event={"ID":"0e09a87f-766e-41ef-950d-4603c8b052d1","Type":"ContainerStarted","Data":"097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a"}
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.900866 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p4wkw" podStartSLOduration=4.454877096 podStartE2EDuration="10.900845388s" podCreationTimestamp="2025-12-02 14:05:58 +0000 UTC" firstStartedPulling="2025-12-02 14:06:00.726477061 +0000 UTC m=+1406.142290922" lastFinishedPulling="2025-12-02 14:06:07.172445353 +0000 UTC m=+1412.588259214" observedRunningTime="2025-12-02 14:06:08.898904383 +0000 UTC m=+1414.314718244" watchObservedRunningTime="2025-12-02 14:06:08.900845388 +0000 UTC m=+1414.316659239"
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.961729 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.968232 4900 scope.go:117] "RemoveContainer" containerID="f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018"
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.974103 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-config-data\") pod \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") "
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.975900 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-combined-ca-bundle\") pod \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") "
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.976101 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7090d8ce-7baf-4f4f-bfd0-171b1680a843-logs\") pod \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") "
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.976369 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x26pb\" (UniqueName: \"kubernetes.io/projected/7090d8ce-7baf-4f4f-bfd0-171b1680a843-kube-api-access-x26pb\") pod \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\" (UID: \"7090d8ce-7baf-4f4f-bfd0-171b1680a843\") "
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.976704 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7090d8ce-7baf-4f4f-bfd0-171b1680a843-logs" (OuterVolumeSpecName: "logs") pod "7090d8ce-7baf-4f4f-bfd0-171b1680a843" (UID: "7090d8ce-7baf-4f4f-bfd0-171b1680a843"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:06:08 crc kubenswrapper[4900]: I1202 14:06:08.986963 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7090d8ce-7baf-4f4f-bfd0-171b1680a843-kube-api-access-x26pb" (OuterVolumeSpecName: "kube-api-access-x26pb") pod "7090d8ce-7baf-4f4f-bfd0-171b1680a843" (UID: "7090d8ce-7baf-4f4f-bfd0-171b1680a843"). InnerVolumeSpecName "kube-api-access-x26pb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.005919 4900 scope.go:117] "RemoveContainer" containerID="720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082"
Dec 02 14:06:09 crc kubenswrapper[4900]: E1202 14:06:09.006354 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082\": container with ID starting with 720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082 not found: ID does not exist" containerID="720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082"
Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.006389 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082"} err="failed to get container status \"720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082\": rpc error: code = NotFound desc = could not find container \"720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082\": container with ID starting with 720cebac9804229da8ad90974fc1c889a3b6bb266e8bdf1c0f144224bb846082 not found: ID does not exist"
Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.006430 4900 scope.go:117] "RemoveContainer" containerID="f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018"
Dec 02 14:06:09 crc kubenswrapper[4900]: E1202 14:06:09.006777 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018\": container with ID starting with f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018 not found: ID does not exist" containerID="f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018"
Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.006939 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018"} err="failed to get container status \"f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018\": rpc error: code = NotFound desc = could not find container \"f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018\": container with ID starting with f3a446aed6c19e6c2333eeef7314b9364fb8829355ae47a04f1171d4562af018 not found: ID does not exist"
Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.025224 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-config-data" (OuterVolumeSpecName: "config-data") pod "7090d8ce-7baf-4f4f-bfd0-171b1680a843" (UID: "7090d8ce-7baf-4f4f-bfd0-171b1680a843"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.044970 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7090d8ce-7baf-4f4f-bfd0-171b1680a843" (UID: "7090d8ce-7baf-4f4f-bfd0-171b1680a843"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.080718 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-combined-ca-bundle\") pod \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.081012 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kf6k\" (UniqueName: \"kubernetes.io/projected/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-kube-api-access-9kf6k\") pod \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.081177 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-config-data\") pod \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\" (UID: \"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e\") " Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.081873 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7090d8ce-7baf-4f4f-bfd0-171b1680a843-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.081897 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x26pb\" (UniqueName: \"kubernetes.io/projected/7090d8ce-7baf-4f4f-bfd0-171b1680a843-kube-api-access-x26pb\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.081910 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.081919 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7090d8ce-7baf-4f4f-bfd0-171b1680a843-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.084493 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-kube-api-access-9kf6k" (OuterVolumeSpecName: "kube-api-access-9kf6k") pod "fce44d9e-d5f8-4625-b8f2-2c77ff046f7e" (UID: "fce44d9e-d5f8-4625-b8f2-2c77ff046f7e"). InnerVolumeSpecName "kube-api-access-9kf6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.105473 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-config-data" (OuterVolumeSpecName: "config-data") pod "fce44d9e-d5f8-4625-b8f2-2c77ff046f7e" (UID: "fce44d9e-d5f8-4625-b8f2-2c77ff046f7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.112690 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fce44d9e-d5f8-4625-b8f2-2c77ff046f7e" (UID: "fce44d9e-d5f8-4625-b8f2-2c77ff046f7e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.183371 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kf6k\" (UniqueName: \"kubernetes.io/projected/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-kube-api-access-9kf6k\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.183418 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.183431 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.209683 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.217172 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.232915 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:06:09 crc kubenswrapper[4900]: E1202 14:06:09.233322 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7090d8ce-7baf-4f4f-bfd0-171b1680a843" containerName="nova-metadata-log" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.233337 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7090d8ce-7baf-4f4f-bfd0-171b1680a843" containerName="nova-metadata-log" Dec 02 14:06:09 crc kubenswrapper[4900]: E1202 14:06:09.233360 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce44d9e-d5f8-4625-b8f2-2c77ff046f7e" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.233366 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce44d9e-d5f8-4625-b8f2-2c77ff046f7e" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 14:06:09 crc kubenswrapper[4900]: E1202 14:06:09.233376 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7090d8ce-7baf-4f4f-bfd0-171b1680a843" containerName="nova-metadata-metadata" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.233382 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7090d8ce-7baf-4f4f-bfd0-171b1680a843" containerName="nova-metadata-metadata" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.233576 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7090d8ce-7baf-4f4f-bfd0-171b1680a843" containerName="nova-metadata-metadata" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.233595 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7090d8ce-7baf-4f4f-bfd0-171b1680a843" containerName="nova-metadata-log" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.233604 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce44d9e-d5f8-4625-b8f2-2c77ff046f7e" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.234553 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.241436 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.241590 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.252504 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.307759 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p4wkw" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.307810 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p4wkw" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.388289 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.388400 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrvzx\" (UniqueName: \"kubernetes.io/projected/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-kube-api-access-xrvzx\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.388427 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-config-data\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.388472 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-logs\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.388516 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.489865 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.489965 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrvzx\" (UniqueName: \"kubernetes.io/projected/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-kube-api-access-xrvzx\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " 
pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.489990 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-config-data\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.490039 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-logs\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.490083 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.490568 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-logs\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.495298 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-config-data\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.496050 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.496288 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.508688 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrvzx\" (UniqueName: \"kubernetes.io/projected/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-kube-api-access-xrvzx\") pod \"nova-metadata-0\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.559444 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.904854 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fce44d9e-d5f8-4625-b8f2-2c77ff046f7e","Type":"ContainerDied","Data":"5cbd717e28ca94df63da2152f7a1d6f8d24f7c5d9441fd3a03891ce3d2f5d760"} Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.905383 4900 scope.go:117] "RemoveContainer" containerID="990529831e051ff4684e490bb271b7ec920ff47682a830e299381f6386846e18" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.905428 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.960565 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.978862 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.979520 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.979739 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.981790 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.986940 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.988237 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.989687 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.992803 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.994156 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 02 14:06:09 crc kubenswrapper[4900]: I1202 14:06:09.996598 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.005837 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.070295 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.106532 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sg5w\" (UniqueName: \"kubernetes.io/projected/ad78a256-27f0-46a9-addb-dbc7b41bebd2-kube-api-access-5sg5w\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.106604 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.106679 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.106744 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.106775 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.208848 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.208913 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.209002 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sg5w\" (UniqueName: \"kubernetes.io/projected/ad78a256-27f0-46a9-addb-dbc7b41bebd2-kube-api-access-5sg5w\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.209025 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.209075 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.213620 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.213890 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.215011 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.217581 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.231159 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sg5w\" (UniqueName: \"kubernetes.io/projected/ad78a256-27f0-46a9-addb-dbc7b41bebd2-kube-api-access-5sg5w\") pod \"nova-cell1-novncproxy-0\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.331165 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.362529 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p4wkw" podUID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerName="registry-server" probeResult="failure" output=< Dec 02 14:06:10 crc kubenswrapper[4900]: timeout: failed to connect service ":50051" within 1s Dec 02 14:06:10 crc kubenswrapper[4900]: > Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.960477 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7090d8ce-7baf-4f4f-bfd0-171b1680a843" path="/var/lib/kubelet/pods/7090d8ce-7baf-4f4f-bfd0-171b1680a843/volumes" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.961334 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce44d9e-d5f8-4625-b8f2-2c77ff046f7e" path="/var/lib/kubelet/pods/fce44d9e-d5f8-4625-b8f2-2c77ff046f7e/volumes" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.961767 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ad78a256-27f0-46a9-addb-dbc7b41bebd2","Type":"ContainerStarted","Data":"1a0cfda2f24ef738ef4727e6d1a46c0bdf990f114f0d6688927d0693e3aa874b"} Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.961791 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.961806 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7f63340-62cc-4bcc-a44a-e45b42eb6e60","Type":"ContainerStarted","Data":"b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e"} Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.961816 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7f63340-62cc-4bcc-a44a-e45b42eb6e60","Type":"ContainerStarted","Data":"d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad"} Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.961824 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7f63340-62cc-4bcc-a44a-e45b42eb6e60","Type":"ContainerStarted","Data":"fbaaa44c5dc7e4928be3f689a4a570a7bc32c9b218a7ad8dc8766e19b20322d0"} Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.975406 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ff8756e-1e52-4215-9a6e-24fccd04935c","Type":"ContainerStarted","Data":"823510c7409cad7cc14c768186a6b26e1875b38425e892adb92c68c0124b8fce"} Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.975464 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.976693 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 14:06:10 crc kubenswrapper[4900]: I1202 14:06:10.998616 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.000724 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.000712512 podStartE2EDuration="2.000712512s" podCreationTimestamp="2025-12-02 14:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-02 14:06:10.977136418 +0000 UTC m=+1416.392950269" watchObservedRunningTime="2025-12-02 14:06:11.000712512 +0000 UTC m=+1416.416526373" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.011673 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.718005376 podStartE2EDuration="12.011658741s" podCreationTimestamp="2025-12-02 14:05:59 +0000 UTC" firstStartedPulling="2025-12-02 14:06:00.715421069 +0000 UTC m=+1406.131234920" lastFinishedPulling="2025-12-02 14:06:10.009074434 +0000 UTC m=+1415.424888285" observedRunningTime="2025-12-02 14:06:11.009961803 +0000 UTC m=+1416.425775674" watchObservedRunningTime="2025-12-02 14:06:11.011658741 +0000 UTC m=+1416.427472592" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.171795 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mqkt5"] Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.173731 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.204850 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mqkt5"] Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.344048 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.344158 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-config\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.344191 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.344284 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.344312 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.344353 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zfn\" (UniqueName: 
\"kubernetes.io/projected/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-kube-api-access-85zfn\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.446607 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-config\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.446712 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.446818 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.446867 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.446902 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85zfn\" (UniqueName: \"kubernetes.io/projected/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-kube-api-access-85zfn\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.446949 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.447529 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-config\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.447542 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.448171 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: 
\"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.448722 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.448818 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.468105 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85zfn\" (UniqueName: \"kubernetes.io/projected/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-kube-api-access-85zfn\") pod \"dnsmasq-dns-cd5cbd7b9-mqkt5\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.497565 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.973407 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mqkt5"] Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.983081 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ad78a256-27f0-46a9-addb-dbc7b41bebd2","Type":"ContainerStarted","Data":"9d4da9c7aa6120d5ccd058c5a050090e9130cfb769ef31b34092dd5d53ce8475"} Dec 02 14:06:11 crc kubenswrapper[4900]: I1202 14:06:11.986336 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" event={"ID":"7d0c0900-1e02-4dec-8e4c-a32f7f560a58","Type":"ContainerStarted","Data":"7f0b2ddc5f13797dc9e5c810843a1614b6e864c69b7ddb9a6d8ad718c7ca8b55"} Dec 02 14:06:12 crc kubenswrapper[4900]: I1202 14:06:12.001187 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.001170811 podStartE2EDuration="3.001170811s" podCreationTimestamp="2025-12-02 14:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:12.00114893 +0000 UTC m=+1417.416962781" watchObservedRunningTime="2025-12-02 14:06:12.001170811 +0000 UTC m=+1417.416984662" Dec 02 14:06:12 crc kubenswrapper[4900]: I1202 14:06:12.997462 4900 generic.go:334] "Generic (PLEG): container finished" podID="7d0c0900-1e02-4dec-8e4c-a32f7f560a58" containerID="6baaeb01e1d3e64c6d9939f8d07d0c37c56a64de01a9109241264a4db766ca79" exitCode=0 Dec 02 14:06:12 crc kubenswrapper[4900]: I1202 14:06:12.997569 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" event={"ID":"7d0c0900-1e02-4dec-8e4c-a32f7f560a58","Type":"ContainerDied","Data":"6baaeb01e1d3e64c6d9939f8d07d0c37c56a64de01a9109241264a4db766ca79"} Dec 02 14:06:13 crc kubenswrapper[4900]: I1202 14:06:13.653670 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:13 crc kubenswrapper[4900]: I1202 
14:06:13.654145 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="ceilometer-central-agent" containerID="cri-o://850af502a0be0c61e8403d3045cb5b3a3b0f50a9394a211d2dc4011a8b1794bc" gracePeriod=30 Dec 02 14:06:13 crc kubenswrapper[4900]: I1202 14:06:13.654256 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="sg-core" containerID="cri-o://1d002374f102bf03a27f49433774508d65b670d849ffbac8069ea153b6387e6c" gracePeriod=30 Dec 02 14:06:13 crc kubenswrapper[4900]: I1202 14:06:13.654304 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="ceilometer-notification-agent" containerID="cri-o://7fb2168789e41c8a344c40cd50e3df9fc197e9dbb08a2f7815dd6053df985df9" gracePeriod=30 Dec 02 14:06:13 crc kubenswrapper[4900]: I1202 14:06:13.654253 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="proxy-httpd" containerID="cri-o://823510c7409cad7cc14c768186a6b26e1875b38425e892adb92c68c0124b8fce" gracePeriod=30 Dec 02 14:06:13 crc kubenswrapper[4900]: I1202 14:06:13.805906 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:06:14 crc kubenswrapper[4900]: I1202 14:06:14.007866 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" event={"ID":"7d0c0900-1e02-4dec-8e4c-a32f7f560a58","Type":"ContainerStarted","Data":"ba3a70bf8272e8aca22d682efbb037a31975cc819affc7f24193c35cc53da8ce"} Dec 02 14:06:14 crc kubenswrapper[4900]: I1202 14:06:14.008360 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:14 crc kubenswrapper[4900]: I1202 14:06:14.012104 4900 generic.go:334] "Generic (PLEG): container finished" podID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerID="823510c7409cad7cc14c768186a6b26e1875b38425e892adb92c68c0124b8fce" exitCode=0 Dec 02 14:06:14 crc kubenswrapper[4900]: I1202 14:06:14.012320 4900 generic.go:334] "Generic (PLEG): container finished" podID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerID="1d002374f102bf03a27f49433774508d65b670d849ffbac8069ea153b6387e6c" exitCode=2 Dec 02 14:06:14 crc kubenswrapper[4900]: I1202 14:06:14.012482 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerName="nova-api-log" containerID="cri-o://745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d" gracePeriod=30 Dec 02 14:06:14 crc kubenswrapper[4900]: I1202 14:06:14.012553 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerName="nova-api-api" containerID="cri-o://384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562" gracePeriod=30 Dec 02 14:06:14 crc kubenswrapper[4900]: I1202 14:06:14.012477 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ff8756e-1e52-4215-9a6e-24fccd04935c","Type":"ContainerDied","Data":"823510c7409cad7cc14c768186a6b26e1875b38425e892adb92c68c0124b8fce"} Dec 02 14:06:14 crc kubenswrapper[4900]: I1202 14:06:14.014819 4900 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ff8756e-1e52-4215-9a6e-24fccd04935c","Type":"ContainerDied","Data":"1d002374f102bf03a27f49433774508d65b670d849ffbac8069ea153b6387e6c"} Dec 02 14:06:14 crc kubenswrapper[4900]: I1202 14:06:14.029112 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" podStartSLOduration=3.029092299 podStartE2EDuration="3.029092299s" podCreationTimestamp="2025-12-02 14:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:14.028115661 +0000 UTC m=+1419.443929512" watchObservedRunningTime="2025-12-02 14:06:14.029092299 +0000 UTC m=+1419.444906150" Dec 02 14:06:14 crc kubenswrapper[4900]: I1202 14:06:14.560677 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 14:06:14 crc kubenswrapper[4900]: I1202 14:06:14.560726 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 14:06:15 crc kubenswrapper[4900]: I1202 14:06:15.028536 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerID="745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d" exitCode=143 Dec 02 14:06:15 crc kubenswrapper[4900]: I1202 14:06:15.028634 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c5c9338-4abd-4a13-ad8e-af4dfda3309e","Type":"ContainerDied","Data":"745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d"} Dec 02 14:06:15 crc kubenswrapper[4900]: I1202 14:06:15.034280 4900 generic.go:334] "Generic (PLEG): container finished" podID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerID="850af502a0be0c61e8403d3045cb5b3a3b0f50a9394a211d2dc4011a8b1794bc" exitCode=0 Dec 02 14:06:15 crc kubenswrapper[4900]: I1202 14:06:15.034376 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ff8756e-1e52-4215-9a6e-24fccd04935c","Type":"ContainerDied","Data":"850af502a0be0c61e8403d3045cb5b3a3b0f50a9394a211d2dc4011a8b1794bc"} Dec 02 14:06:15 crc kubenswrapper[4900]: I1202 14:06:15.331273 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.060123 4900 generic.go:334] "Generic (PLEG): container finished" podID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerID="7fb2168789e41c8a344c40cd50e3df9fc197e9dbb08a2f7815dd6053df985df9" exitCode=0 Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.060220 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ff8756e-1e52-4215-9a6e-24fccd04935c","Type":"ContainerDied","Data":"7fb2168789e41c8a344c40cd50e3df9fc197e9dbb08a2f7815dd6053df985df9"} Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.294989 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.452221 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-combined-ca-bundle\") pod \"0ff8756e-1e52-4215-9a6e-24fccd04935c\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.452303 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-run-httpd\") pod \"0ff8756e-1e52-4215-9a6e-24fccd04935c\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.452459 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-sg-core-conf-yaml\") pod \"0ff8756e-1e52-4215-9a6e-24fccd04935c\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.452511 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-config-data\") pod \"0ff8756e-1e52-4215-9a6e-24fccd04935c\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.452547 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-scripts\") pod \"0ff8756e-1e52-4215-9a6e-24fccd04935c\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.452624 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-log-httpd\") pod \"0ff8756e-1e52-4215-9a6e-24fccd04935c\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.452690 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85mk8\" (UniqueName: \"kubernetes.io/projected/0ff8756e-1e52-4215-9a6e-24fccd04935c-kube-api-access-85mk8\") pod \"0ff8756e-1e52-4215-9a6e-24fccd04935c\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.452833 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ff8756e-1e52-4215-9a6e-24fccd04935c" (UID: "0ff8756e-1e52-4215-9a6e-24fccd04935c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.452961 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-ceilometer-tls-certs\") pod \"0ff8756e-1e52-4215-9a6e-24fccd04935c\" (UID: \"0ff8756e-1e52-4215-9a6e-24fccd04935c\") " Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.453916 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ff8756e-1e52-4215-9a6e-24fccd04935c" (UID: "0ff8756e-1e52-4215-9a6e-24fccd04935c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.454494 4900 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.454521 4900 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ff8756e-1e52-4215-9a6e-24fccd04935c-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.458081 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-scripts" (OuterVolumeSpecName: "scripts") pod "0ff8756e-1e52-4215-9a6e-24fccd04935c" (UID: "0ff8756e-1e52-4215-9a6e-24fccd04935c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.464122 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ff8756e-1e52-4215-9a6e-24fccd04935c-kube-api-access-85mk8" (OuterVolumeSpecName: "kube-api-access-85mk8") pod "0ff8756e-1e52-4215-9a6e-24fccd04935c" (UID: "0ff8756e-1e52-4215-9a6e-24fccd04935c"). InnerVolumeSpecName "kube-api-access-85mk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.518769 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ff8756e-1e52-4215-9a6e-24fccd04935c" (UID: "0ff8756e-1e52-4215-9a6e-24fccd04935c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.531028 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0ff8756e-1e52-4215-9a6e-24fccd04935c" (UID: "0ff8756e-1e52-4215-9a6e-24fccd04935c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.556990 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.557269 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85mk8\" (UniqueName: \"kubernetes.io/projected/0ff8756e-1e52-4215-9a6e-24fccd04935c-kube-api-access-85mk8\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.557449 4900 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.557623 4900 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.592924 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ff8756e-1e52-4215-9a6e-24fccd04935c" (UID: "0ff8756e-1e52-4215-9a6e-24fccd04935c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.622988 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-config-data" (OuterVolumeSpecName: "config-data") pod "0ff8756e-1e52-4215-9a6e-24fccd04935c" (UID: "0ff8756e-1e52-4215-9a6e-24fccd04935c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.659381 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:16 crc kubenswrapper[4900]: I1202 14:06:16.659414 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ff8756e-1e52-4215-9a6e-24fccd04935c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.075407 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ff8756e-1e52-4215-9a6e-24fccd04935c","Type":"ContainerDied","Data":"54bca2e45ce7654833a838822ab88f4f7d6cf2f4ac81f4e8e558ad32b08d5b35"} Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.075477 4900 scope.go:117] "RemoveContainer" containerID="823510c7409cad7cc14c768186a6b26e1875b38425e892adb92c68c0124b8fce" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.075503 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.094995 4900 scope.go:117] "RemoveContainer" containerID="1d002374f102bf03a27f49433774508d65b670d849ffbac8069ea153b6387e6c" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.109250 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.114703 4900 scope.go:117] "RemoveContainer" containerID="7fb2168789e41c8a344c40cd50e3df9fc197e9dbb08a2f7815dd6053df985df9" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.123912 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.150346 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:17 crc kubenswrapper[4900]: E1202 14:06:17.151911 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="ceilometer-central-agent" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.151936 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="ceilometer-central-agent" Dec 02 14:06:17 crc kubenswrapper[4900]: E1202 14:06:17.151967 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="proxy-httpd" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.151978 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="proxy-httpd" Dec 02 14:06:17 crc kubenswrapper[4900]: E1202 14:06:17.152023 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="ceilometer-notification-agent" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.152034 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="ceilometer-notification-agent" Dec 02 14:06:17 crc kubenswrapper[4900]: E1202 14:06:17.152067 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="sg-core" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.152075 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="sg-core" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.153227 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="sg-core" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.153263 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="ceilometer-central-agent" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.153290 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="proxy-httpd" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.153329 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" containerName="ceilometer-notification-agent" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.154351 4900 scope.go:117] "RemoveContainer" containerID="850af502a0be0c61e8403d3045cb5b3a3b0f50a9394a211d2dc4011a8b1794bc" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.161419 4900 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.167233 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.167550 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.167798 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.198405 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.274909 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.275227 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.275254 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-run-httpd\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.275305 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-scripts\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.275331 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-config-data\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.275380 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-log-httpd\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.275441 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26tg6\" (UniqueName: \"kubernetes.io/projected/533da492-8f1f-4593-86bd-8d5b316bb897-kube-api-access-26tg6\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.275459 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.377156 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.377202 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-run-httpd\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.377240 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-scripts\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.377258 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-config-data\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.377292 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-log-httpd\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.377334 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26tg6\" (UniqueName: \"kubernetes.io/projected/533da492-8f1f-4593-86bd-8d5b316bb897-kube-api-access-26tg6\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.377351 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.377413 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.378500 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-run-httpd\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.378591 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-log-httpd\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.385230 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.385712 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.386054 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.386315 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-scripts\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.397974 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-config-data\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.398289 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26tg6\" (UniqueName: \"kubernetes.io/projected/533da492-8f1f-4593-86bd-8d5b316bb897-kube-api-access-26tg6\") pod \"ceilometer-0\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.598497 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.623437 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.783856 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l54n\" (UniqueName: \"kubernetes.io/projected/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-kube-api-access-2l54n\") pod \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.783925 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-logs\") pod \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.783999 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-combined-ca-bundle\") pod \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.784138 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-config-data\") pod \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\" (UID: \"1c5c9338-4abd-4a13-ad8e-af4dfda3309e\") " Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.784579 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-logs" (OuterVolumeSpecName: "logs") pod "1c5c9338-4abd-4a13-ad8e-af4dfda3309e" (UID: "1c5c9338-4abd-4a13-ad8e-af4dfda3309e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.790015 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-kube-api-access-2l54n" (OuterVolumeSpecName: "kube-api-access-2l54n") pod "1c5c9338-4abd-4a13-ad8e-af4dfda3309e" (UID: "1c5c9338-4abd-4a13-ad8e-af4dfda3309e"). InnerVolumeSpecName "kube-api-access-2l54n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.816410 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-config-data" (OuterVolumeSpecName: "config-data") pod "1c5c9338-4abd-4a13-ad8e-af4dfda3309e" (UID: "1c5c9338-4abd-4a13-ad8e-af4dfda3309e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.826335 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c5c9338-4abd-4a13-ad8e-af4dfda3309e" (UID: "1c5c9338-4abd-4a13-ad8e-af4dfda3309e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.886092 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.886455 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l54n\" (UniqueName: \"kubernetes.io/projected/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-kube-api-access-2l54n\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.886468 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:17 crc kubenswrapper[4900]: I1202 14:06:17.886476 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5c9338-4abd-4a13-ad8e-af4dfda3309e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.087733 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerID="384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562" exitCode=0 Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.087793 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c5c9338-4abd-4a13-ad8e-af4dfda3309e","Type":"ContainerDied","Data":"384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562"} Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.087820 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c5c9338-4abd-4a13-ad8e-af4dfda3309e","Type":"ContainerDied","Data":"39a698985bea34e0aefcca4bf9bdff44dd5ad0d75e10654e0d09c159017ad094"} Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.087834 4900 scope.go:117] "RemoveContainer" containerID="384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.087936 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: W1202 14:06:18.089918 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod533da492_8f1f_4593_86bd_8d5b316bb897.slice/crio-46b5616a1841004184c412c62464ad5d24509666d9d0e701673c2c05b0c2f916 WatchSource:0}: Error finding container 46b5616a1841004184c412c62464ad5d24509666d9d0e701673c2c05b0c2f916: Status 404 returned error can't find the container with id 46b5616a1841004184c412c62464ad5d24509666d9d0e701673c2c05b0c2f916 Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.091576 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.194714 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.206796 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.215679 4900 scope.go:117] "RemoveContainer" containerID="745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.216911 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 14:06:18 crc kubenswrapper[4900]: E1202 14:06:18.217278 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerName="nova-api-log" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.217293 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerName="nova-api-log" Dec 02 14:06:18 crc kubenswrapper[4900]: E1202 14:06:18.217326 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerName="nova-api-api" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.217332 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerName="nova-api-api" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.217574 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerName="nova-api-log" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.217592 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" containerName="nova-api-api" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.218507 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.221562 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.222569 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.222764 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.240582 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.256418 4900 scope.go:117] "RemoveContainer" containerID="384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562" Dec 02 14:06:18 crc kubenswrapper[4900]: E1202 14:06:18.257012 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562\": container with ID starting with 384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562 not found: ID does not exist" containerID="384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.257065 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562"} err="failed to get container status \"384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562\": rpc error: code = NotFound desc = could not find container \"384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562\": container with ID starting with 384b0acbfd227a0accb14e428782abb48d8eb41eda687a3e349b365a76a5e562 not found: ID does not exist" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.257097 4900 scope.go:117] "RemoveContainer" containerID="745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d" Dec 02 14:06:18 crc kubenswrapper[4900]: E1202 14:06:18.257595 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d\": container with ID starting with 745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d not found: ID does not exist" containerID="745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.257633 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d"} err="failed to get container status \"745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d\": rpc error: code = NotFound desc = could not find container \"745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d\": container with ID starting with 745e3760857b405749db627c1e898df10a983ebf70d0416235bf3790a62de65d not found: ID does not exist" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.314557 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 
14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.314599 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-public-tls-certs\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.314675 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xkpx\" (UniqueName: \"kubernetes.io/projected/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-kube-api-access-6xkpx\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.314834 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-config-data\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.315001 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-logs\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.315195 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.416918 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xkpx\" (UniqueName: \"kubernetes.io/projected/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-kube-api-access-6xkpx\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.417006 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-config-data\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.417052 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-logs\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.417104 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.417165 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.417187 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-public-tls-certs\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.418136 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-logs\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.423932 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.423961 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.428145 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-config-data\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.435761 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-public-tls-certs\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.437370 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xkpx\" (UniqueName: \"kubernetes.io/projected/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-kube-api-access-6xkpx\") pod \"nova-api-0\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.536574 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:06:18 crc kubenswrapper[4900]: W1202 14:06:18.848796 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb85e71bb_2c08_4821_adf8_5ab6786c5c9b.slice/crio-7b29b3e1fe384f720fff1655e3c6d105758b5f76ce533491f88b81f359de0a44 WatchSource:0}: Error finding container 7b29b3e1fe384f720fff1655e3c6d105758b5f76ce533491f88b81f359de0a44: Status 404 returned error can't find the container with id 7b29b3e1fe384f720fff1655e3c6d105758b5f76ce533491f88b81f359de0a44 Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.849354 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.936176 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ff8756e-1e52-4215-9a6e-24fccd04935c" path="/var/lib/kubelet/pods/0ff8756e-1e52-4215-9a6e-24fccd04935c/volumes" Dec 02 14:06:18 crc kubenswrapper[4900]: I1202 14:06:18.937146 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5c9338-4abd-4a13-ad8e-af4dfda3309e" path="/var/lib/kubelet/pods/1c5c9338-4abd-4a13-ad8e-af4dfda3309e/volumes" Dec 02 14:06:19 crc kubenswrapper[4900]: I1202 14:06:19.102679 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b85e71bb-2c08-4821-adf8-5ab6786c5c9b","Type":"ContainerStarted","Data":"f64c246d511c8a14352f9b1ac27adfc8aa8cde5d1afcaabec0c6714db0d8a8e3"} Dec 02 14:06:19 crc kubenswrapper[4900]: I1202 14:06:19.102741 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b85e71bb-2c08-4821-adf8-5ab6786c5c9b","Type":"ContainerStarted","Data":"7b29b3e1fe384f720fff1655e3c6d105758b5f76ce533491f88b81f359de0a44"} Dec 02 14:06:19 crc kubenswrapper[4900]: I1202 14:06:19.105561 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"533da492-8f1f-4593-86bd-8d5b316bb897","Type":"ContainerStarted","Data":"8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a"} Dec 02 14:06:19 crc kubenswrapper[4900]: I1202 14:06:19.106244 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"533da492-8f1f-4593-86bd-8d5b316bb897","Type":"ContainerStarted","Data":"46b5616a1841004184c412c62464ad5d24509666d9d0e701673c2c05b0c2f916"} Dec 02 14:06:19 crc kubenswrapper[4900]: I1202 14:06:19.378945 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p4wkw" Dec 02 14:06:19 crc kubenswrapper[4900]: I1202 14:06:19.456375 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p4wkw" Dec 02 14:06:19 crc kubenswrapper[4900]: I1202 14:06:19.560951 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 14:06:19 crc kubenswrapper[4900]: I1202 14:06:19.560994 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 14:06:19 crc kubenswrapper[4900]: I1202 14:06:19.620917 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4wkw"] Dec 02 14:06:20 crc kubenswrapper[4900]: I1202 14:06:20.117774 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b85e71bb-2c08-4821-adf8-5ab6786c5c9b","Type":"ContainerStarted","Data":"bedbc200d19f936535d94fd6432a16a677ed86493cdb3389497b79552338f7db"} Dec 02 14:06:20 crc kubenswrapper[4900]: I1202 14:06:20.122166 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"533da492-8f1f-4593-86bd-8d5b316bb897","Type":"ContainerStarted","Data":"75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f"} Dec 02 14:06:20 crc kubenswrapper[4900]: I1202 14:06:20.157238 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.157219741 podStartE2EDuration="2.157219741s" podCreationTimestamp="2025-12-02 14:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:20.151281904 +0000 UTC m=+1425.567095765" watchObservedRunningTime="2025-12-02 14:06:20.157219741 +0000 UTC m=+1425.573033612" Dec 02 14:06:20 crc kubenswrapper[4900]: I1202 14:06:20.332017 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:20 crc kubenswrapper[4900]: I1202 14:06:20.363789 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:20 crc kubenswrapper[4900]: I1202 14:06:20.578891 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:20 crc kubenswrapper[4900]: I1202 14:06:20.578975 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.131043 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p4wkw" podUID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerName="registry-server" containerID="cri-o://097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a" gracePeriod=2 Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.151599 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.333787 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-fsfpg"] Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.335903 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.337780 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.340822 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.352777 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fsfpg"] Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.379972 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.380332 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-scripts\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.380449 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ntx\" (UniqueName: \"kubernetes.io/projected/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-kube-api-access-82ntx\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.380533 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-config-data\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.482265 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ntx\" (UniqueName: \"kubernetes.io/projected/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-kube-api-access-82ntx\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.482323 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-config-data\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.482421 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.482514 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-scripts\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.490248 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-scripts\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.491790 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-config-data\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.499810 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.500732 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.506952 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ntx\" (UniqueName: \"kubernetes.io/projected/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-kube-api-access-82ntx\") pod \"nova-cell1-cell-mapping-fsfpg\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.571054 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-n8fgk"] Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.578075 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" podUID="1cc70359-25b1-45d9-a530-5204a265158e" containerName="dnsmasq-dns" containerID="cri-o://238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115" gracePeriod=10 Dec 02 14:06:21 crc kubenswrapper[4900]: I1202 14:06:21.658832 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.036728 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.091590 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p4wkw" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.101108 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-svc\") pod \"1cc70359-25b1-45d9-a530-5204a265158e\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.101183 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-catalog-content\") pod \"0e09a87f-766e-41ef-950d-4603c8b052d1\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.101280 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2g64\" (UniqueName: \"kubernetes.io/projected/0e09a87f-766e-41ef-950d-4603c8b052d1-kube-api-access-r2g64\") pod \"0e09a87f-766e-41ef-950d-4603c8b052d1\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.101373 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44rqr\" (UniqueName: \"kubernetes.io/projected/1cc70359-25b1-45d9-a530-5204a265158e-kube-api-access-44rqr\") pod \"1cc70359-25b1-45d9-a530-5204a265158e\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.101400 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-sb\") pod \"1cc70359-25b1-45d9-a530-5204a265158e\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.101430 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-config\") pod \"1cc70359-25b1-45d9-a530-5204a265158e\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.101467 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-utilities\") pod \"0e09a87f-766e-41ef-950d-4603c8b052d1\" (UID: \"0e09a87f-766e-41ef-950d-4603c8b052d1\") " Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.101539 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-swift-storage-0\") pod \"1cc70359-25b1-45d9-a530-5204a265158e\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.101561 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-nb\") pod \"1cc70359-25b1-45d9-a530-5204a265158e\" (UID: \"1cc70359-25b1-45d9-a530-5204a265158e\") " Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.105079 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-utilities" (OuterVolumeSpecName: "utilities") pod "0e09a87f-766e-41ef-950d-4603c8b052d1" 
(UID: "0e09a87f-766e-41ef-950d-4603c8b052d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.118086 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e09a87f-766e-41ef-950d-4603c8b052d1-kube-api-access-r2g64" (OuterVolumeSpecName: "kube-api-access-r2g64") pod "0e09a87f-766e-41ef-950d-4603c8b052d1" (UID: "0e09a87f-766e-41ef-950d-4603c8b052d1"). InnerVolumeSpecName "kube-api-access-r2g64". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.118201 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cc70359-25b1-45d9-a530-5204a265158e-kube-api-access-44rqr" (OuterVolumeSpecName: "kube-api-access-44rqr") pod "1cc70359-25b1-45d9-a530-5204a265158e" (UID: "1cc70359-25b1-45d9-a530-5204a265158e"). InnerVolumeSpecName "kube-api-access-44rqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.146323 4900 generic.go:334] "Generic (PLEG): container finished" podID="1cc70359-25b1-45d9-a530-5204a265158e" containerID="238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115" exitCode=0 Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.146380 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" event={"ID":"1cc70359-25b1-45d9-a530-5204a265158e","Type":"ContainerDied","Data":"238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115"} Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.146407 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" event={"ID":"1cc70359-25b1-45d9-a530-5204a265158e","Type":"ContainerDied","Data":"a03ebadfc03596fec6bd639ecbd19ef1cc65f4fef56fd1758a1ff370e1e3d2d3"} Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.146424 4900 scope.go:117] "RemoveContainer" containerID="238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.146539 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-n8fgk" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.151567 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"533da492-8f1f-4593-86bd-8d5b316bb897","Type":"ContainerStarted","Data":"2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4"} Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.154359 4900 generic.go:334] "Generic (PLEG): container finished" podID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerID="097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a" exitCode=0 Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.155167 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p4wkw" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.155574 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4wkw" event={"ID":"0e09a87f-766e-41ef-950d-4603c8b052d1","Type":"ContainerDied","Data":"097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a"} Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.155595 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4wkw" event={"ID":"0e09a87f-766e-41ef-950d-4603c8b052d1","Type":"ContainerDied","Data":"c4ae532a148827d6dda7214afd2f37834e87fab28389f834bf5cce18d186f3b1"} Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.178481 4900 scope.go:117] "RemoveContainer" containerID="553cafd7ab91415e31d3b2382fe144ec7b02000846966aa6b1300a7fd6d087dd" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.196627 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1cc70359-25b1-45d9-a530-5204a265158e" (UID: "1cc70359-25b1-45d9-a530-5204a265158e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.197088 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1cc70359-25b1-45d9-a530-5204a265158e" (UID: "1cc70359-25b1-45d9-a530-5204a265158e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.197372 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1cc70359-25b1-45d9-a530-5204a265158e" (UID: "1cc70359-25b1-45d9-a530-5204a265158e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.197719 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-config" (OuterVolumeSpecName: "config") pod "1cc70359-25b1-45d9-a530-5204a265158e" (UID: "1cc70359-25b1-45d9-a530-5204a265158e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.198144 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1cc70359-25b1-45d9-a530-5204a265158e" (UID: "1cc70359-25b1-45d9-a530-5204a265158e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.205167 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.205199 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2g64\" (UniqueName: \"kubernetes.io/projected/0e09a87f-766e-41ef-950d-4603c8b052d1-kube-api-access-r2g64\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.205212 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44rqr\" (UniqueName: \"kubernetes.io/projected/1cc70359-25b1-45d9-a530-5204a265158e-kube-api-access-44rqr\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.205221 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.205313 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.205325 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.205334 4900 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.205343 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cc70359-25b1-45d9-a530-5204a265158e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.236144 4900 scope.go:117] "RemoveContainer" containerID="238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115" Dec 02 14:06:22 crc kubenswrapper[4900]: E1202 14:06:22.236616 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115\": container with ID starting with 238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115 not found: ID does not exist" containerID="238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.237595 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115"} err="failed to get container status \"238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115\": rpc error: code = NotFound desc = could not find container \"238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115\": container with ID starting with 238e6143b200ffa55ad7a469988a87721d59ef9a0b542be8310ca42142c7b115 not found: ID does not exist" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.237630 4900 scope.go:117] "RemoveContainer" 
containerID="553cafd7ab91415e31d3b2382fe144ec7b02000846966aa6b1300a7fd6d087dd" Dec 02 14:06:22 crc kubenswrapper[4900]: E1202 14:06:22.238032 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"553cafd7ab91415e31d3b2382fe144ec7b02000846966aa6b1300a7fd6d087dd\": container with ID starting with 553cafd7ab91415e31d3b2382fe144ec7b02000846966aa6b1300a7fd6d087dd not found: ID does not exist" containerID="553cafd7ab91415e31d3b2382fe144ec7b02000846966aa6b1300a7fd6d087dd" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.238103 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"553cafd7ab91415e31d3b2382fe144ec7b02000846966aa6b1300a7fd6d087dd"} err="failed to get container status \"553cafd7ab91415e31d3b2382fe144ec7b02000846966aa6b1300a7fd6d087dd\": rpc error: code = NotFound desc = could not find container \"553cafd7ab91415e31d3b2382fe144ec7b02000846966aa6b1300a7fd6d087dd\": container with ID starting with 553cafd7ab91415e31d3b2382fe144ec7b02000846966aa6b1300a7fd6d087dd not found: ID does not exist" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.238131 4900 scope.go:117] "RemoveContainer" containerID="097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.250713 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fsfpg"] Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.260303 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e09a87f-766e-41ef-950d-4603c8b052d1" (UID: "0e09a87f-766e-41ef-950d-4603c8b052d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.307264 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e09a87f-766e-41ef-950d-4603c8b052d1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.453789 4900 scope.go:117] "RemoveContainer" containerID="fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.588430 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-n8fgk"] Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.592262 4900 scope.go:117] "RemoveContainer" containerID="9547d32b5ade522a00c12365cd4456c764aa3b576ac7ec7f6fe47627f8fb7337" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.595521 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-n8fgk"] Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.614283 4900 scope.go:117] "RemoveContainer" containerID="097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a" Dec 02 14:06:22 crc kubenswrapper[4900]: E1202 14:06:22.614799 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a\": container with ID starting with 097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a not found: ID does not exist" containerID="097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.614877 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a"} err="failed to get container status \"097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a\": rpc error: code = NotFound desc = could not find container \"097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a\": container with ID starting with 097e558f95bc62ab6924852757b84906c7318a914acb689ba4c8dba34190df2a not found: ID does not exist" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.614905 4900 scope.go:117] "RemoveContainer" containerID="fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283" Dec 02 14:06:22 crc kubenswrapper[4900]: E1202 14:06:22.615176 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283\": container with ID starting with fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283 not found: ID does not exist" containerID="fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.615216 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283"} err="failed to get container status \"fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283\": rpc error: code = NotFound desc = could not find container \"fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283\": container with ID starting with fd81164e790f627f74b1558863ad62dbe0eeefc61d495431367aba13e8930283 not found: ID does not exist" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.615229 4900 
scope.go:117] "RemoveContainer" containerID="9547d32b5ade522a00c12365cd4456c764aa3b576ac7ec7f6fe47627f8fb7337" Dec 02 14:06:22 crc kubenswrapper[4900]: E1202 14:06:22.615445 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9547d32b5ade522a00c12365cd4456c764aa3b576ac7ec7f6fe47627f8fb7337\": container with ID starting with 9547d32b5ade522a00c12365cd4456c764aa3b576ac7ec7f6fe47627f8fb7337 not found: ID does not exist" containerID="9547d32b5ade522a00c12365cd4456c764aa3b576ac7ec7f6fe47627f8fb7337" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.615461 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9547d32b5ade522a00c12365cd4456c764aa3b576ac7ec7f6fe47627f8fb7337"} err="failed to get container status \"9547d32b5ade522a00c12365cd4456c764aa3b576ac7ec7f6fe47627f8fb7337\": rpc error: code = NotFound desc = could not find container \"9547d32b5ade522a00c12365cd4456c764aa3b576ac7ec7f6fe47627f8fb7337\": container with ID starting with 9547d32b5ade522a00c12365cd4456c764aa3b576ac7ec7f6fe47627f8fb7337 not found: ID does not exist" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.623386 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4wkw"] Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.632122 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p4wkw"] Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.927920 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e09a87f-766e-41ef-950d-4603c8b052d1" path="/var/lib/kubelet/pods/0e09a87f-766e-41ef-950d-4603c8b052d1/volumes" Dec 02 14:06:22 crc kubenswrapper[4900]: I1202 14:06:22.929231 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cc70359-25b1-45d9-a530-5204a265158e" path="/var/lib/kubelet/pods/1cc70359-25b1-45d9-a530-5204a265158e/volumes" Dec 02 14:06:23 crc kubenswrapper[4900]: I1202 14:06:23.169253 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fsfpg" event={"ID":"d3cec85c-cdfd-4f4c-bae9-629ab14653a4","Type":"ContainerStarted","Data":"8a82c0a1aec9e86de7bbdef845e617555b59d19b58b5b8cd0eeb0e65e22aae77"} Dec 02 14:06:23 crc kubenswrapper[4900]: I1202 14:06:23.169306 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fsfpg" event={"ID":"d3cec85c-cdfd-4f4c-bae9-629ab14653a4","Type":"ContainerStarted","Data":"5bbc36fb0334c0064dc04aaffccdef304225ec0bc64b7aa05ff1a26339f309a9"} Dec 02 14:06:23 crc kubenswrapper[4900]: I1202 14:06:23.191163 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-fsfpg" podStartSLOduration=2.191144582 podStartE2EDuration="2.191144582s" podCreationTimestamp="2025-12-02 14:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:23.187884811 +0000 UTC m=+1428.603698672" watchObservedRunningTime="2025-12-02 14:06:23.191144582 +0000 UTC m=+1428.606958443" Dec 02 14:06:25 crc kubenswrapper[4900]: I1202 14:06:25.205126 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"533da492-8f1f-4593-86bd-8d5b316bb897","Type":"ContainerStarted","Data":"5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8"} Dec 02 14:06:25 crc kubenswrapper[4900]: 
I1202 14:06:25.206917 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:06:25 crc kubenswrapper[4900]: I1202 14:06:25.241360 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.029128577 podStartE2EDuration="8.24133577s" podCreationTimestamp="2025-12-02 14:06:17 +0000 UTC" firstStartedPulling="2025-12-02 14:06:18.092690142 +0000 UTC m=+1423.508503993" lastFinishedPulling="2025-12-02 14:06:24.304897325 +0000 UTC m=+1429.720711186" observedRunningTime="2025-12-02 14:06:25.231994878 +0000 UTC m=+1430.647808759" watchObservedRunningTime="2025-12-02 14:06:25.24133577 +0000 UTC m=+1430.657149631" Dec 02 14:06:28 crc kubenswrapper[4900]: I1202 14:06:28.250975 4900 generic.go:334] "Generic (PLEG): container finished" podID="d3cec85c-cdfd-4f4c-bae9-629ab14653a4" containerID="8a82c0a1aec9e86de7bbdef845e617555b59d19b58b5b8cd0eeb0e65e22aae77" exitCode=0 Dec 02 14:06:28 crc kubenswrapper[4900]: I1202 14:06:28.251109 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fsfpg" event={"ID":"d3cec85c-cdfd-4f4c-bae9-629ab14653a4","Type":"ContainerDied","Data":"8a82c0a1aec9e86de7bbdef845e617555b59d19b58b5b8cd0eeb0e65e22aae77"} Dec 02 14:06:28 crc kubenswrapper[4900]: I1202 14:06:28.537602 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 14:06:28 crc kubenswrapper[4900]: I1202 14:06:28.538064 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.567871 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.569821 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.595022 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.595470 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.600387 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.600811 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.728482 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.881895 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82ntx\" (UniqueName: \"kubernetes.io/projected/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-kube-api-access-82ntx\") pod \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.881953 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-combined-ca-bundle\") pod \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.882074 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-config-data\") pod \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.882199 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-scripts\") pod \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\" (UID: \"d3cec85c-cdfd-4f4c-bae9-629ab14653a4\") " Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.888217 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-kube-api-access-82ntx" (OuterVolumeSpecName: "kube-api-access-82ntx") pod "d3cec85c-cdfd-4f4c-bae9-629ab14653a4" (UID: "d3cec85c-cdfd-4f4c-bae9-629ab14653a4"). InnerVolumeSpecName "kube-api-access-82ntx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.893016 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-scripts" (OuterVolumeSpecName: "scripts") pod "d3cec85c-cdfd-4f4c-bae9-629ab14653a4" (UID: "d3cec85c-cdfd-4f4c-bae9-629ab14653a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.914834 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-config-data" (OuterVolumeSpecName: "config-data") pod "d3cec85c-cdfd-4f4c-bae9-629ab14653a4" (UID: "d3cec85c-cdfd-4f4c-bae9-629ab14653a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.923751 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3cec85c-cdfd-4f4c-bae9-629ab14653a4" (UID: "d3cec85c-cdfd-4f4c-bae9-629ab14653a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.983733 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.983760 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82ntx\" (UniqueName: \"kubernetes.io/projected/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-kube-api-access-82ntx\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.983770 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:29 crc kubenswrapper[4900]: I1202 14:06:29.983780 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cec85c-cdfd-4f4c-bae9-629ab14653a4-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:30 crc kubenswrapper[4900]: I1202 14:06:30.283361 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fsfpg" Dec 02 14:06:30 crc kubenswrapper[4900]: I1202 14:06:30.283701 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fsfpg" event={"ID":"d3cec85c-cdfd-4f4c-bae9-629ab14653a4","Type":"ContainerDied","Data":"5bbc36fb0334c0064dc04aaffccdef304225ec0bc64b7aa05ff1a26339f309a9"} Dec 02 14:06:30 crc kubenswrapper[4900]: I1202 14:06:30.283762 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bbc36fb0334c0064dc04aaffccdef304225ec0bc64b7aa05ff1a26339f309a9" Dec 02 14:06:30 crc kubenswrapper[4900]: I1202 14:06:30.474011 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:06:30 crc kubenswrapper[4900]: I1202 14:06:30.474600 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" containerName="nova-api-log" containerID="cri-o://f64c246d511c8a14352f9b1ac27adfc8aa8cde5d1afcaabec0c6714db0d8a8e3" gracePeriod=30 Dec 02 14:06:30 crc kubenswrapper[4900]: I1202 14:06:30.476336 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" containerName="nova-api-api" containerID="cri-o://bedbc200d19f936535d94fd6432a16a677ed86493cdb3389497b79552338f7db" gracePeriod=30 Dec 02 14:06:30 crc kubenswrapper[4900]: I1202 14:06:30.483107 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:06:30 crc kubenswrapper[4900]: I1202 14:06:30.483312 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="16489417-050d-44b6-9a14-c2a7db666022" containerName="nova-scheduler-scheduler" containerID="cri-o://02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7" gracePeriod=30 Dec 02 14:06:30 crc kubenswrapper[4900]: I1202 14:06:30.537129 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:06:31 crc kubenswrapper[4900]: I1202 14:06:31.297009 4900 generic.go:334] "Generic (PLEG): container finished" podID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" 
containerID="f64c246d511c8a14352f9b1ac27adfc8aa8cde5d1afcaabec0c6714db0d8a8e3" exitCode=143 Dec 02 14:06:31 crc kubenswrapper[4900]: I1202 14:06:31.297117 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b85e71bb-2c08-4821-adf8-5ab6786c5c9b","Type":"ContainerDied","Data":"f64c246d511c8a14352f9b1ac27adfc8aa8cde5d1afcaabec0c6714db0d8a8e3"} Dec 02 14:06:32 crc kubenswrapper[4900]: I1202 14:06:32.313177 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-metadata" containerID="cri-o://b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e" gracePeriod=30 Dec 02 14:06:32 crc kubenswrapper[4900]: I1202 14:06:32.313661 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-log" containerID="cri-o://d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad" gracePeriod=30 Dec 02 14:06:33 crc kubenswrapper[4900]: I1202 14:06:33.328522 4900 generic.go:334] "Generic (PLEG): container finished" podID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerID="d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad" exitCode=143 Dec 02 14:06:33 crc kubenswrapper[4900]: I1202 14:06:33.328584 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7f63340-62cc-4bcc-a44a-e45b42eb6e60","Type":"ContainerDied","Data":"d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad"} Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.074150 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.231376 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsg6l\" (UniqueName: \"kubernetes.io/projected/16489417-050d-44b6-9a14-c2a7db666022-kube-api-access-qsg6l\") pod \"16489417-050d-44b6-9a14-c2a7db666022\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.231665 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-config-data\") pod \"16489417-050d-44b6-9a14-c2a7db666022\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.231702 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-combined-ca-bundle\") pod \"16489417-050d-44b6-9a14-c2a7db666022\" (UID: \"16489417-050d-44b6-9a14-c2a7db666022\") " Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.237103 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16489417-050d-44b6-9a14-c2a7db666022-kube-api-access-qsg6l" (OuterVolumeSpecName: "kube-api-access-qsg6l") pod "16489417-050d-44b6-9a14-c2a7db666022" (UID: "16489417-050d-44b6-9a14-c2a7db666022"). InnerVolumeSpecName "kube-api-access-qsg6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.280578 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16489417-050d-44b6-9a14-c2a7db666022" (UID: "16489417-050d-44b6-9a14-c2a7db666022"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.287923 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-config-data" (OuterVolumeSpecName: "config-data") pod "16489417-050d-44b6-9a14-c2a7db666022" (UID: "16489417-050d-44b6-9a14-c2a7db666022"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.334165 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.334205 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16489417-050d-44b6-9a14-c2a7db666022-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.334219 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsg6l\" (UniqueName: \"kubernetes.io/projected/16489417-050d-44b6-9a14-c2a7db666022-kube-api-access-qsg6l\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.357679 4900 generic.go:334] "Generic (PLEG): container finished" podID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" containerID="bedbc200d19f936535d94fd6432a16a677ed86493cdb3389497b79552338f7db" exitCode=0 Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.357802 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b85e71bb-2c08-4821-adf8-5ab6786c5c9b","Type":"ContainerDied","Data":"bedbc200d19f936535d94fd6432a16a677ed86493cdb3389497b79552338f7db"} Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.361134 4900 generic.go:334] "Generic (PLEG): container finished" podID="16489417-050d-44b6-9a14-c2a7db666022" containerID="02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7" exitCode=0 Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.361170 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16489417-050d-44b6-9a14-c2a7db666022","Type":"ContainerDied","Data":"02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7"} Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.361193 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16489417-050d-44b6-9a14-c2a7db666022","Type":"ContainerDied","Data":"a3d4b38cab61efc11c8ae718ec4bff3ef7f0f4c8d73821666ac12b46ec450a37"} Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.361208 4900 scope.go:117] "RemoveContainer" containerID="02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.361345 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.388758 4900 scope.go:117] "RemoveContainer" containerID="02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7" Dec 02 14:06:35 crc kubenswrapper[4900]: E1202 14:06:35.393167 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7\": container with ID starting with 02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7 not found: ID does not exist" containerID="02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.393213 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7"} err="failed to get container status \"02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7\": rpc error: code = NotFound desc = could not find container \"02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7\": container with ID starting with 02b7c5eaa1177e743594f656cdaf6426c1cc29e5ce7b00925c38b42dd9c9d9a7 not found: ID does not exist" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.394439 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.403797 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.410433 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.420873 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:06:35 crc kubenswrapper[4900]: E1202 14:06:35.421509 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc70359-25b1-45d9-a530-5204a265158e" containerName="dnsmasq-dns" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421523 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc70359-25b1-45d9-a530-5204a265158e" containerName="dnsmasq-dns" Dec 02 14:06:35 crc kubenswrapper[4900]: E1202 14:06:35.421560 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16489417-050d-44b6-9a14-c2a7db666022" containerName="nova-scheduler-scheduler" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421567 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="16489417-050d-44b6-9a14-c2a7db666022" containerName="nova-scheduler-scheduler" Dec 02 14:06:35 crc kubenswrapper[4900]: E1202 14:06:35.421581 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerName="extract-utilities" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421587 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerName="extract-utilities" Dec 02 14:06:35 crc kubenswrapper[4900]: E1202 14:06:35.421598 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" containerName="nova-api-api" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421605 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" containerName="nova-api-api" Dec 02 14:06:35 crc 
kubenswrapper[4900]: E1202 14:06:35.421664 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerName="registry-server" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421673 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerName="registry-server" Dec 02 14:06:35 crc kubenswrapper[4900]: E1202 14:06:35.421693 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3cec85c-cdfd-4f4c-bae9-629ab14653a4" containerName="nova-manage" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421699 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3cec85c-cdfd-4f4c-bae9-629ab14653a4" containerName="nova-manage" Dec 02 14:06:35 crc kubenswrapper[4900]: E1202 14:06:35.421709 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerName="extract-content" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421715 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerName="extract-content" Dec 02 14:06:35 crc kubenswrapper[4900]: E1202 14:06:35.421721 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" containerName="nova-api-log" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421727 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" containerName="nova-api-log" Dec 02 14:06:35 crc kubenswrapper[4900]: E1202 14:06:35.421742 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc70359-25b1-45d9-a530-5204a265158e" containerName="init" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421748 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc70359-25b1-45d9-a530-5204a265158e" containerName="init" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421924 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e09a87f-766e-41ef-950d-4603c8b052d1" containerName="registry-server" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421948 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="16489417-050d-44b6-9a14-c2a7db666022" containerName="nova-scheduler-scheduler" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421957 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" containerName="nova-api-log" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421963 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cc70359-25b1-45d9-a530-5204a265158e" containerName="dnsmasq-dns" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421971 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3cec85c-cdfd-4f4c-bae9-629ab14653a4" containerName="nova-manage" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.421983 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" containerName="nova-api-api" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.428407 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.436596 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.448056 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.539618 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-internal-tls-certs\") pod \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.539756 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-combined-ca-bundle\") pod \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.539827 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-logs\") pod \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.539869 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xkpx\" (UniqueName: \"kubernetes.io/projected/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-kube-api-access-6xkpx\") pod \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.539921 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-public-tls-certs\") pod \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.539962 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-config-data\") pod \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\" (UID: \"b85e71bb-2c08-4821-adf8-5ab6786c5c9b\") " Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.540259 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvj92\" (UniqueName: \"kubernetes.io/projected/ed8121d7-7b10-44c5-9b43-9088b198f34c-kube-api-access-cvj92\") pod \"nova-scheduler-0\" (UID: \"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.540304 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-logs" (OuterVolumeSpecName: "logs") pod "b85e71bb-2c08-4821-adf8-5ab6786c5c9b" (UID: "b85e71bb-2c08-4821-adf8-5ab6786c5c9b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.540333 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.540463 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-config-data\") pod \"nova-scheduler-0\" (UID: \"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.540834 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.544014 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-kube-api-access-6xkpx" (OuterVolumeSpecName: "kube-api-access-6xkpx") pod "b85e71bb-2c08-4821-adf8-5ab6786c5c9b" (UID: "b85e71bb-2c08-4821-adf8-5ab6786c5c9b"). InnerVolumeSpecName "kube-api-access-6xkpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.573281 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-config-data" (OuterVolumeSpecName: "config-data") pod "b85e71bb-2c08-4821-adf8-5ab6786c5c9b" (UID: "b85e71bb-2c08-4821-adf8-5ab6786c5c9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.577664 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b85e71bb-2c08-4821-adf8-5ab6786c5c9b" (UID: "b85e71bb-2c08-4821-adf8-5ab6786c5c9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.589843 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b85e71bb-2c08-4821-adf8-5ab6786c5c9b" (UID: "b85e71bb-2c08-4821-adf8-5ab6786c5c9b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.590636 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b85e71bb-2c08-4821-adf8-5ab6786c5c9b" (UID: "b85e71bb-2c08-4821-adf8-5ab6786c5c9b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.625228 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:39930->10.217.0.194:8775: read: connection reset by peer" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.625301 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:39932->10.217.0.194:8775: read: connection reset by peer" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.642860 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvj92\" (UniqueName: \"kubernetes.io/projected/ed8121d7-7b10-44c5-9b43-9088b198f34c-kube-api-access-cvj92\") pod \"nova-scheduler-0\" (UID: \"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.642933 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.643002 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-config-data\") pod \"nova-scheduler-0\" (UID: \"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.643100 4900 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.643112 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.643120 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xkpx\" (UniqueName: \"kubernetes.io/projected/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-kube-api-access-6xkpx\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.643132 4900 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.643141 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85e71bb-2c08-4821-adf8-5ab6786c5c9b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.647783 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-config-data\") pod \"nova-scheduler-0\" (UID: 
\"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.648556 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.662111 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvj92\" (UniqueName: \"kubernetes.io/projected/ed8121d7-7b10-44c5-9b43-9088b198f34c-kube-api-access-cvj92\") pod \"nova-scheduler-0\" (UID: \"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " pod="openstack/nova-scheduler-0" Dec 02 14:06:35 crc kubenswrapper[4900]: I1202 14:06:35.759785 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.200897 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.254803 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-combined-ca-bundle\") pod \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.255005 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-logs\") pod \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.255103 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrvzx\" (UniqueName: \"kubernetes.io/projected/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-kube-api-access-xrvzx\") pod \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.255283 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-nova-metadata-tls-certs\") pod \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.255322 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-config-data\") pod \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\" (UID: \"c7f63340-62cc-4bcc-a44a-e45b42eb6e60\") " Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.256151 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-logs" (OuterVolumeSpecName: "logs") pod "c7f63340-62cc-4bcc-a44a-e45b42eb6e60" (UID: "c7f63340-62cc-4bcc-a44a-e45b42eb6e60"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.264890 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-kube-api-access-xrvzx" (OuterVolumeSpecName: "kube-api-access-xrvzx") pod "c7f63340-62cc-4bcc-a44a-e45b42eb6e60" (UID: "c7f63340-62cc-4bcc-a44a-e45b42eb6e60"). InnerVolumeSpecName "kube-api-access-xrvzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.281438 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7f63340-62cc-4bcc-a44a-e45b42eb6e60" (UID: "c7f63340-62cc-4bcc-a44a-e45b42eb6e60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.304798 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-config-data" (OuterVolumeSpecName: "config-data") pod "c7f63340-62cc-4bcc-a44a-e45b42eb6e60" (UID: "c7f63340-62cc-4bcc-a44a-e45b42eb6e60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.333115 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c7f63340-62cc-4bcc-a44a-e45b42eb6e60" (UID: "c7f63340-62cc-4bcc-a44a-e45b42eb6e60"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.357512 4900 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.357559 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.357573 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.357585 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.357597 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrvzx\" (UniqueName: \"kubernetes.io/projected/c7f63340-62cc-4bcc-a44a-e45b42eb6e60-kube-api-access-xrvzx\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.374471 4900 generic.go:334] "Generic (PLEG): container finished" podID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerID="b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e" exitCode=0 Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.374542 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.374577 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7f63340-62cc-4bcc-a44a-e45b42eb6e60","Type":"ContainerDied","Data":"b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e"} Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.374704 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7f63340-62cc-4bcc-a44a-e45b42eb6e60","Type":"ContainerDied","Data":"fbaaa44c5dc7e4928be3f689a4a570a7bc32c9b218a7ad8dc8766e19b20322d0"} Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.374731 4900 scope.go:117] "RemoveContainer" containerID="b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.383199 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b85e71bb-2c08-4821-adf8-5ab6786c5c9b","Type":"ContainerDied","Data":"7b29b3e1fe384f720fff1655e3c6d105758b5f76ce533491f88b81f359de0a44"} Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.383322 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.385219 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.399713 4900 scope.go:117] "RemoveContainer" containerID="d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.428355 4900 scope.go:117] "RemoveContainer" containerID="b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e" Dec 02 14:06:36 crc kubenswrapper[4900]: E1202 14:06:36.428912 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e\": container with ID starting with b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e not found: ID does not exist" containerID="b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.428964 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e"} err="failed to get container status \"b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e\": rpc error: code = NotFound desc = could not find container \"b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e\": container with ID starting with b48cda869f4dd78ded21c899f001c10b4e67db97b8b1021e14c447a15291de6e not found: ID does not exist" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.428996 4900 scope.go:117] "RemoveContainer" containerID="d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad" Dec 02 14:06:36 crc kubenswrapper[4900]: E1202 14:06:36.429399 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad\": container with ID starting with d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad not found: ID does not exist" containerID="d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.429426 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad"} err="failed to get container status \"d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad\": rpc error: code = NotFound desc = could not find container \"d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad\": container with ID starting with d97848ae042281121f2921e8a5254d1247c86743b79d35a210b281cd564626ad not found: ID does not exist" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.429444 4900 scope.go:117] "RemoveContainer" containerID="bedbc200d19f936535d94fd6432a16a677ed86493cdb3389497b79552338f7db" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.431611 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.444959 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.460818 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.475499 4900 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.478846 4900 scope.go:117] "RemoveContainer" containerID="f64c246d511c8a14352f9b1ac27adfc8aa8cde5d1afcaabec0c6714db0d8a8e3" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.488151 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:06:36 crc kubenswrapper[4900]: E1202 14:06:36.488671 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-metadata" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.488698 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-metadata" Dec 02 14:06:36 crc kubenswrapper[4900]: E1202 14:06:36.488727 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-log" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.488737 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-log" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.489025 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-log" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.489072 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" containerName="nova-metadata-metadata" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.490553 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.497578 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.497987 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.499655 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.500162 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.502048 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.503974 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.504287 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.505736 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.516022 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.662759 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.662856 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b58h\" (UniqueName: \"kubernetes.io/projected/5624f474-dd54-4580-b816-f238cc733b5a-kube-api-access-7b58h\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.662889 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.662996 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8a322a0-752b-4ab1-9418-41c4747eebee-logs\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.663082 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-config-data\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.663151 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-config-data\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.663218 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.663260 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h58v\" (UniqueName: \"kubernetes.io/projected/f8a322a0-752b-4ab1-9418-41c4747eebee-kube-api-access-7h58v\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.663319 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5624f474-dd54-4580-b816-f238cc733b5a-logs\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.663350 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.663439 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-public-tls-certs\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.766158 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.766288 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b58h\" (UniqueName: \"kubernetes.io/projected/5624f474-dd54-4580-b816-f238cc733b5a-kube-api-access-7b58h\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.766345 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.766421 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8a322a0-752b-4ab1-9418-41c4747eebee-logs\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.766529 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-config-data\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.766605 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-config-data\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.766721 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.766801 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h58v\" (UniqueName: \"kubernetes.io/projected/f8a322a0-752b-4ab1-9418-41c4747eebee-kube-api-access-7h58v\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.766878 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5624f474-dd54-4580-b816-f238cc733b5a-logs\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.766942 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.767072 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-public-tls-certs\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.768040 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8a322a0-752b-4ab1-9418-41c4747eebee-logs\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.768215 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5624f474-dd54-4580-b816-f238cc733b5a-logs\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.771683 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.772907 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.773210 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.774036 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-public-tls-certs\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.774566 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-config-data\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.774589 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-config-data\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.774895 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.788180 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b58h\" (UniqueName: \"kubernetes.io/projected/5624f474-dd54-4580-b816-f238cc733b5a-kube-api-access-7b58h\") pod \"nova-metadata-0\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.790993 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h58v\" (UniqueName: \"kubernetes.io/projected/f8a322a0-752b-4ab1-9418-41c4747eebee-kube-api-access-7h58v\") pod \"nova-api-0\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.875229 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.890164 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.962690 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16489417-050d-44b6-9a14-c2a7db666022" path="/var/lib/kubelet/pods/16489417-050d-44b6-9a14-c2a7db666022/volumes"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.964306 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b85e71bb-2c08-4821-adf8-5ab6786c5c9b" path="/var/lib/kubelet/pods/b85e71bb-2c08-4821-adf8-5ab6786c5c9b/volumes"
Dec 02 14:06:36 crc kubenswrapper[4900]: I1202 14:06:36.965366 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f63340-62cc-4bcc-a44a-e45b42eb6e60" path="/var/lib/kubelet/pods/c7f63340-62cc-4bcc-a44a-e45b42eb6e60/volumes"
Dec 02 14:06:37 crc kubenswrapper[4900]: I1202 14:06:37.327362 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 14:06:37 crc kubenswrapper[4900]: W1202 14:06:37.330401 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5624f474_dd54_4580_b816_f238cc733b5a.slice/crio-2b959af286e31deabd35f3b4615973d13c65c11a556390fda2c9977139897d1a WatchSource:0}: Error finding container 2b959af286e31deabd35f3b4615973d13c65c11a556390fda2c9977139897d1a: Status 404 returned error can't find the container with id 2b959af286e31deabd35f3b4615973d13c65c11a556390fda2c9977139897d1a
Dec 02 14:06:37 crc kubenswrapper[4900]: I1202 14:06:37.402970 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ed8121d7-7b10-44c5-9b43-9088b198f34c","Type":"ContainerStarted","Data":"88407bc6bd8e2dfedf02e3b155acca5b6726043dcabd9f0d892d020af43e5c9f"}
Dec 02 14:06:37 crc kubenswrapper[4900]: I1202 14:06:37.403019 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ed8121d7-7b10-44c5-9b43-9088b198f34c","Type":"ContainerStarted","Data":"424f04867cd8a3508f4b024310b83812c04c165e5556465186846c4594b9ea2c"}
Dec 02 14:06:37 crc kubenswrapper[4900]: I1202 14:06:37.410763 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5624f474-dd54-4580-b816-f238cc733b5a","Type":"ContainerStarted","Data":"2b959af286e31deabd35f3b4615973d13c65c11a556390fda2c9977139897d1a"}
Dec 02 14:06:37 crc kubenswrapper[4900]: W1202 14:06:37.435489 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8a322a0_752b_4ab1_9418_41c4747eebee.slice/crio-f4d9ad1aefdbffb4fb8d7675c24e88f9e4d1ea02aca5329e6567e8d369219207 WatchSource:0}: Error finding container f4d9ad1aefdbffb4fb8d7675c24e88f9e4d1ea02aca5329e6567e8d369219207: Status 404 returned error can't find the container with id f4d9ad1aefdbffb4fb8d7675c24e88f9e4d1ea02aca5329e6567e8d369219207
Dec 02 14:06:37 crc kubenswrapper[4900]: I1202 14:06:37.442253 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 02 14:06:37 crc kubenswrapper[4900]: I1202 14:06:37.445493 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.44546293 podStartE2EDuration="2.44546293s" podCreationTimestamp="2025-12-02 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:37.42695187 +0000 UTC m=+1442.842765731" watchObservedRunningTime="2025-12-02 14:06:37.44546293 +0000 UTC m=+1442.861276821"
Dec 02 14:06:38 crc kubenswrapper[4900]: I1202 14:06:38.429940 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8a322a0-752b-4ab1-9418-41c4747eebee","Type":"ContainerStarted","Data":"863da9d070c65e09dd0916cf6ca8bf7a03e08fb91cbe309cd740bc4a5f3a11aa"}
Dec 02 14:06:38 crc kubenswrapper[4900]: I1202 14:06:38.430626 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8a322a0-752b-4ab1-9418-41c4747eebee","Type":"ContainerStarted","Data":"81cc101728f5de7b2d97ff757885b53c0f26120c2800d3265c49f79eefd4fe58"}
Dec 02 14:06:38 crc kubenswrapper[4900]: I1202 14:06:38.430654 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8a322a0-752b-4ab1-9418-41c4747eebee","Type":"ContainerStarted","Data":"f4d9ad1aefdbffb4fb8d7675c24e88f9e4d1ea02aca5329e6567e8d369219207"}
Dec 02 14:06:38 crc kubenswrapper[4900]: I1202 14:06:38.433081 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5624f474-dd54-4580-b816-f238cc733b5a","Type":"ContainerStarted","Data":"111a5b135a319c9f662db3b3e6a11bfecf64162e38bdb1b58f02ab519892e209"}
Dec 02 14:06:38 crc kubenswrapper[4900]: I1202 14:06:38.433176 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5624f474-dd54-4580-b816-f238cc733b5a","Type":"ContainerStarted","Data":"23b3303be4e59abe74fc9f9832f7a048be4602b7bb3f5d0f5af2d708139fa0ab"}
Dec 02 14:06:38 crc kubenswrapper[4900]: I1202 14:06:38.456809 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.456789678 podStartE2EDuration="2.456789678s" podCreationTimestamp="2025-12-02 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:38.450329896 +0000 UTC m=+1443.866143747" watchObservedRunningTime="2025-12-02 14:06:38.456789678 +0000 UTC m=+1443.872603529"
Dec 02 14:06:38 crc kubenswrapper[4900]: I1202 14:06:38.483764 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.483741004 podStartE2EDuration="2.483741004s" podCreationTimestamp="2025-12-02 14:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:38.469252517 +0000 UTC m=+1443.885066368" watchObservedRunningTime="2025-12-02 14:06:38.483741004 +0000 UTC m=+1443.899554855"
Dec 02 14:06:40 crc kubenswrapper[4900]: E1202 14:06:40.602451 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb85e71bb_2c08_4821_adf8_5ab6786c5c9b.slice/crio-conmon-bedbc200d19f936535d94fd6432a16a677ed86493cdb3389497b79552338f7db.scope\": RecentStats: unable to find data in memory cache]"
Dec 02 14:06:40 crc kubenswrapper[4900]: I1202 14:06:40.760358 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 02 14:06:41 crc kubenswrapper[4900]: I1202 14:06:41.876433 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 02 14:06:41 crc kubenswrapper[4900]: I1202 14:06:41.876509 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 02 14:06:45 crc kubenswrapper[4900]: I1202 14:06:45.760711 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 02 14:06:45 crc kubenswrapper[4900]: I1202 14:06:45.848431 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 02 14:06:46 crc kubenswrapper[4900]: I1202 14:06:46.596509 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 02 14:06:46 crc kubenswrapper[4900]: I1202 14:06:46.875961 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 02 14:06:46 crc kubenswrapper[4900]: I1202 14:06:46.876010 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 02 14:06:46 crc kubenswrapper[4900]: I1202 14:06:46.890982 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 02 14:06:46 crc kubenswrapper[4900]: I1202 14:06:46.891033 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 02 14:06:47 crc kubenswrapper[4900]: I1202 14:06:47.638602 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 02 14:06:47 crc kubenswrapper[4900]: I1202 14:06:47.893780 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5624f474-dd54-4580-b816-f238cc733b5a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:06:47 crc kubenswrapper[4900]: I1202 14:06:47.893799 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5624f474-dd54-4580-b816-f238cc733b5a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:06:47 crc kubenswrapper[4900]: I1202 14:06:47.907752 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:06:47 crc kubenswrapper[4900]: I1202 14:06:47.907762 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:06:50 crc kubenswrapper[4900]: E1202 14:06:50.829455 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb85e71bb_2c08_4821_adf8_5ab6786c5c9b.slice/crio-conmon-bedbc200d19f936535d94fd6432a16a677ed86493cdb3389497b79552338f7db.scope\": RecentStats: unable to find data in memory cache]"
Dec 02 14:06:56 crc kubenswrapper[4900]: I1202 14:06:56.883714 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 02 14:06:56 crc kubenswrapper[4900]: I1202 14:06:56.884398 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 02 14:06:56 crc kubenswrapper[4900]: I1202 14:06:56.888899 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 02 14:06:56 crc kubenswrapper[4900]: I1202 14:06:56.890495 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 02 14:06:56 crc kubenswrapper[4900]: I1202 14:06:56.896016 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 02 14:06:56 crc kubenswrapper[4900]: I1202 14:06:56.896420 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 02 14:06:56 crc kubenswrapper[4900]: I1202 14:06:56.901127 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 02 14:06:56 crc kubenswrapper[4900]: I1202 14:06:56.901479 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 02 14:06:57 crc kubenswrapper[4900]: I1202 14:06:57.681986 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 02 14:06:57 crc kubenswrapper[4900]: I1202 14:06:57.697099 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 02 14:07:01 crc kubenswrapper[4900]: E1202 14:07:01.075027 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb85e71bb_2c08_4821_adf8_5ab6786c5c9b.slice/crio-conmon-bedbc200d19f936535d94fd6432a16a677ed86493cdb3389497b79552338f7db.scope\": RecentStats: unable to find data in memory cache]"
Dec 02 14:07:11 crc kubenswrapper[4900]: E1202 14:07:11.355971 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb85e71bb_2c08_4821_adf8_5ab6786c5c9b.slice/crio-conmon-bedbc200d19f936535d94fd6432a16a677ed86493cdb3389497b79552338f7db.scope\": RecentStats: unable to find data in memory cache]"
Dec 02 14:07:15 crc kubenswrapper[4900]: I1202 14:07:15.116832 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:07:15 crc kubenswrapper[4900]: I1202 14:07:15.117488 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:07:19 crc kubenswrapper[4900]: I1202 14:07:19.675390 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Dec 02 14:07:19 crc kubenswrapper[4900]: I1202 14:07:19.676187 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="6ff6dcaf-b619-4169-9b36-81ee92264d71" containerName="openstackclient" containerID="cri-o://6c10ca3ad8fceefa6060a64015e329b0a3d68e6291a95453dad98596043ab0ef" gracePeriod=2
Dec 02 14:07:19 crc kubenswrapper[4900]: I1202 14:07:19.706538 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.008206 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 02 14:07:20 crc kubenswrapper[4900]: E1202 14:07:20.187019 4900 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Dec 02 14:07:20 crc kubenswrapper[4900]: E1202 14:07:20.187071 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data podName:8db82600-180c-4114-8006-551e1b566ce5 nodeName:}" failed. No retries permitted until 2025-12-02 14:07:20.687058246 +0000 UTC m=+1486.102872097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data") pod "rabbitmq-cell1-server-0" (UID: "8db82600-180c-4114-8006-551e1b566ce5") : configmap "rabbitmq-cell1-config-data" not found
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.380019 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican4c1f-account-delete-ksk64"]
Dec 02 14:07:20 crc kubenswrapper[4900]: E1202 14:07:20.380418 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff6dcaf-b619-4169-9b36-81ee92264d71" containerName="openstackclient"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.380433 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff6dcaf-b619-4169-9b36-81ee92264d71" containerName="openstackclient"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.380608 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff6dcaf-b619-4169-9b36-81ee92264d71" containerName="openstackclient"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.381238 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican4c1f-account-delete-ksk64"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.397110 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.416891 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican4c1f-account-delete-ksk64"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.490061 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placementa999-account-delete-25gdq"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.491379 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placementa999-account-delete-25gdq"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.493903 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27a1401-3ad1-40a3-9ce6-08cac86fef42-operator-scripts\") pod \"barbican4c1f-account-delete-ksk64\" (UID: \"b27a1401-3ad1-40a3-9ce6-08cac86fef42\") " pod="openstack/barbican4c1f-account-delete-ksk64"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.493955 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xswgl\" (UniqueName: \"kubernetes.io/projected/b27a1401-3ad1-40a3-9ce6-08cac86fef42-kube-api-access-xswgl\") pod \"barbican4c1f-account-delete-ksk64\" (UID: \"b27a1401-3ad1-40a3-9ce6-08cac86fef42\") " pod="openstack/barbican4c1f-account-delete-ksk64"
Dec 02 14:07:20 crc kubenswrapper[4900]: E1202 14:07:20.494408 4900 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Dec 02 14:07:20 crc kubenswrapper[4900]: E1202 14:07:20.494466 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data podName:e410de46-b373-431a-8486-21a6f1268e41 nodeName:}" failed. No retries permitted until 2025-12-02 14:07:20.994449542 +0000 UTC m=+1486.410263393 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data") pod "rabbitmq-server-0" (UID: "e410de46-b373-431a-8486-21a6f1268e41") : configmap "rabbitmq-config-data" not found
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.536476 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementa999-account-delete-25gdq"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.562921 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-7h46p"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.598672 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jmm6\" (UniqueName: \"kubernetes.io/projected/5d121fee-d98a-4dcd-ba07-1d4b2015460d-kube-api-access-2jmm6\") pod \"placementa999-account-delete-25gdq\" (UID: \"5d121fee-d98a-4dcd-ba07-1d4b2015460d\") " pod="openstack/placementa999-account-delete-25gdq"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.598877 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d121fee-d98a-4dcd-ba07-1d4b2015460d-operator-scripts\") pod \"placementa999-account-delete-25gdq\" (UID: \"5d121fee-d98a-4dcd-ba07-1d4b2015460d\") " pod="openstack/placementa999-account-delete-25gdq"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.598957 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27a1401-3ad1-40a3-9ce6-08cac86fef42-operator-scripts\") pod \"barbican4c1f-account-delete-ksk64\" (UID: \"b27a1401-3ad1-40a3-9ce6-08cac86fef42\") " pod="openstack/barbican4c1f-account-delete-ksk64"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.599016 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xswgl\" (UniqueName: \"kubernetes.io/projected/b27a1401-3ad1-40a3-9ce6-08cac86fef42-kube-api-access-xswgl\") pod \"barbican4c1f-account-delete-ksk64\" (UID: \"b27a1401-3ad1-40a3-9ce6-08cac86fef42\") " pod="openstack/barbican4c1f-account-delete-ksk64"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.600480 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27a1401-3ad1-40a3-9ce6-08cac86fef42-operator-scripts\") pod \"barbican4c1f-account-delete-ksk64\" (UID: \"b27a1401-3ad1-40a3-9ce6-08cac86fef42\") " pod="openstack/barbican4c1f-account-delete-ksk64"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.644711 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-7h46p"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.674896 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xswgl\" (UniqueName: \"kubernetes.io/projected/b27a1401-3ad1-40a3-9ce6-08cac86fef42-kube-api-access-xswgl\") pod \"barbican4c1f-account-delete-ksk64\" (UID: \"b27a1401-3ad1-40a3-9ce6-08cac86fef42\") " pod="openstack/barbican4c1f-account-delete-ksk64"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.702070 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jmm6\" (UniqueName: \"kubernetes.io/projected/5d121fee-d98a-4dcd-ba07-1d4b2015460d-kube-api-access-2jmm6\") pod \"placementa999-account-delete-25gdq\" (UID: \"5d121fee-d98a-4dcd-ba07-1d4b2015460d\") " pod="openstack/placementa999-account-delete-25gdq"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.711352 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d121fee-d98a-4dcd-ba07-1d4b2015460d-operator-scripts\") pod \"placementa999-account-delete-25gdq\" (UID: \"5d121fee-d98a-4dcd-ba07-1d4b2015460d\") " pod="openstack/placementa999-account-delete-25gdq"
Dec 02 14:07:20 crc kubenswrapper[4900]: E1202 14:07:20.717132 4900 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Dec 02 14:07:20 crc kubenswrapper[4900]: E1202 14:07:20.717208 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data podName:8db82600-180c-4114-8006-551e1b566ce5 nodeName:}" failed. No retries permitted until 2025-12-02 14:07:21.717190242 +0000 UTC m=+1487.133004093 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data") pod "rabbitmq-cell1-server-0" (UID: "8db82600-180c-4114-8006-551e1b566ce5") : configmap "rabbitmq-cell1-config-data" not found
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.703630 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican4c1f-account-delete-ksk64"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.718449 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d121fee-d98a-4dcd-ba07-1d4b2015460d-operator-scripts\") pod \"placementa999-account-delete-25gdq\" (UID: \"5d121fee-d98a-4dcd-ba07-1d4b2015460d\") " pod="openstack/placementa999-account-delete-25gdq"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.703673 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance0080-account-delete-fz4nc"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.722106 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance0080-account-delete-fz4nc"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.737232 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jmm6\" (UniqueName: \"kubernetes.io/projected/5d121fee-d98a-4dcd-ba07-1d4b2015460d-kube-api-access-2jmm6\") pod \"placementa999-account-delete-25gdq\" (UID: \"5d121fee-d98a-4dcd-ba07-1d4b2015460d\") " pod="openstack/placementa999-account-delete-25gdq"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.783681 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.784035 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="c353c599-462c-4196-a35c-7622350bb349" containerName="openstack-network-exporter" containerID="cri-o://a10158b3879ca4655fc8e6391c12e72686bdf7aae27551ad7cd381abaa366312" gracePeriod=300
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.823226 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwr58\" (UniqueName: \"kubernetes.io/projected/df8a1411-7582-4f42-8b5a-3b97cebd9254-kube-api-access-hwr58\") pod \"glance0080-account-delete-fz4nc\" (UID: \"df8a1411-7582-4f42-8b5a-3b97cebd9254\") " pod="openstack/glance0080-account-delete-fz4nc"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.823541 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df8a1411-7582-4f42-8b5a-3b97cebd9254-operator-scripts\") pod \"glance0080-account-delete-fz4nc\" (UID: \"df8a1411-7582-4f42-8b5a-3b97cebd9254\") " pod="openstack/glance0080-account-delete-fz4nc"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.834840 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance0080-account-delete-fz4nc"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.835442 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placementa999-account-delete-25gdq"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.853086 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-4dw9x"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.882054 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-4dw9x"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.899177 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mqkt5"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.899481 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" podUID="7d0c0900-1e02-4dec-8e4c-a32f7f560a58" containerName="dnsmasq-dns" containerID="cri-o://ba3a70bf8272e8aca22d682efbb037a31975cc819affc7f24193c35cc53da8ce" gracePeriod=10
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.926925 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df8a1411-7582-4f42-8b5a-3b97cebd9254-operator-scripts\") pod \"glance0080-account-delete-fz4nc\" (UID: \"df8a1411-7582-4f42-8b5a-3b97cebd9254\") " pod="openstack/glance0080-account-delete-fz4nc"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.927399 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwr58\" (UniqueName: \"kubernetes.io/projected/df8a1411-7582-4f42-8b5a-3b97cebd9254-kube-api-access-hwr58\") pod \"glance0080-account-delete-fz4nc\" (UID: \"df8a1411-7582-4f42-8b5a-3b97cebd9254\") " pod="openstack/glance0080-account-delete-fz4nc"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.928718 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df8a1411-7582-4f42-8b5a-3b97cebd9254-operator-scripts\") pod \"glance0080-account-delete-fz4nc\" (UID: \"df8a1411-7582-4f42-8b5a-3b97cebd9254\") " pod="openstack/glance0080-account-delete-fz4nc"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.955539 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="c353c599-462c-4196-a35c-7622350bb349" containerName="ovsdbserver-nb" containerID="cri-o://757b9daa69d51a10f6b4b9ded6c9bdd6924c1d21fda0bc963aa398285554bfc0" gracePeriod=300
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.967389 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwr58\" (UniqueName: \"kubernetes.io/projected/df8a1411-7582-4f42-8b5a-3b97cebd9254-kube-api-access-hwr58\") pod \"glance0080-account-delete-fz4nc\" (UID: \"df8a1411-7582-4f42-8b5a-3b97cebd9254\") " pod="openstack/glance0080-account-delete-fz4nc"
Dec 02 14:07:20 crc kubenswrapper[4900]: E1202 14:07:20.976857 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="757b9daa69d51a10f6b4b9ded6c9bdd6924c1d21fda0bc963aa398285554bfc0" cmd=["/usr/bin/pidof","ovsdb-server"]
Dec 02 14:07:20 crc kubenswrapper[4900]: E1202 14:07:20.988263 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="757b9daa69d51a10f6b4b9ded6c9bdd6924c1d21fda0bc963aa398285554bfc0" cmd=["/usr/bin/pidof","ovsdb-server"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.989012 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="433a8e08-2261-425b-97d1-2b61ad9ae5f9" path="/var/lib/kubelet/pods/433a8e08-2261-425b-97d1-2b61ad9ae5f9/volumes"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.989865 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95452ca6-e25a-44d1-a666-eb99c921ae7c" path="/var/lib/kubelet/pods/95452ca6-e25a-44d1-a666-eb99c921ae7c/volumes"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.990506 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder5ba5-account-delete-kd6ml"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.993588 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder5ba5-account-delete-kd6ml"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.993612 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.993996 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="096b1286-863b-44aa-ac7e-5cd509d99950" containerName="openstack-network-exporter" containerID="cri-o://285f3512fc061d05a5061746933aff39280feb7e3c81097b9fc9c0b9cf0d32da" gracePeriod=300
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.994281 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder5ba5-account-delete-kd6ml"
Dec 02 14:07:20 crc kubenswrapper[4900]: I1202 14:07:20.997166 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron1777-account-delete-mh9jf"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:20.998891 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron1777-account-delete-mh9jf"
Dec 02 14:07:21 crc kubenswrapper[4900]: E1202 14:07:21.002181 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="757b9daa69d51a10f6b4b9ded6c9bdd6924c1d21fda0bc963aa398285554bfc0" cmd=["/usr/bin/pidof","ovsdb-server"]
Dec 02 14:07:21 crc kubenswrapper[4900]: E1202 14:07:21.002251 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="c353c599-462c-4196-a35c-7622350bb349" containerName="ovsdbserver-nb"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.008723 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron1777-account-delete-mh9jf"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.025322 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-wt5sd"]
Dec 02 14:07:21 crc kubenswrapper[4900]: E1202 14:07:21.042674 4900 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Dec 02 14:07:21 crc kubenswrapper[4900]: E1202 14:07:21.042727 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data podName:e410de46-b373-431a-8486-21a6f1268e41 nodeName:}" failed. No retries permitted until 2025-12-02 14:07:22.042711476 +0000 UTC m=+1487.458525317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data") pod "rabbitmq-server-0" (UID: "e410de46-b373-431a-8486-21a6f1268e41") : configmap "rabbitmq-config-data" not found
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.048430 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-wt5sd"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.061126 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wrtvb"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.073632 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wrtvb"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.082966 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.083558 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9c17cf84-2174-42d8-880a-9a643a161ef4" containerName="ovn-northd" containerID="cri-o://9b7c3327a1cb841f7805b58f727c06e1d6143291f5866de8942d0948d6568573" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.084020 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9c17cf84-2174-42d8-880a-9a643a161ef4" containerName="openstack-network-exporter" containerID="cri-o://21b9a43c02558258bc5549999999ff72f00f4644cbc3a254387cc0fa7154a8d5" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.136121 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell02641-account-delete-zn9bc"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.139947 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="096b1286-863b-44aa-ac7e-5cd509d99950" containerName="ovsdbserver-sb" containerID="cri-o://d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93" gracePeriod=300
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.149086 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell02641-account-delete-zn9bc"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.149870 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsk7c\" (UniqueName: \"kubernetes.io/projected/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-kube-api-access-rsk7c\") pod \"cinder5ba5-account-delete-kd6ml\" (UID: \"ff7f1bf7-3734-4c0e-afc2-d841cc97a529\") " pod="openstack/cinder5ba5-account-delete-kd6ml"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.150018 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf57v\" (UniqueName: \"kubernetes.io/projected/8c322124-103a-40d2-a429-f018076f88ff-kube-api-access-zf57v\") pod \"neutron1777-account-delete-mh9jf\" (UID: \"8c322124-103a-40d2-a429-f018076f88ff\") " pod="openstack/neutron1777-account-delete-mh9jf"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.150058 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-operator-scripts\") pod \"cinder5ba5-account-delete-kd6ml\" (UID: \"ff7f1bf7-3734-4c0e-afc2-d841cc97a529\") " pod="openstack/cinder5ba5-account-delete-kd6ml"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.150172 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c322124-103a-40d2-a429-f018076f88ff-operator-scripts\") pod \"neutron1777-account-delete-mh9jf\" (UID: \"8c322124-103a-40d2-a429-f018076f88ff\") " pod="openstack/neutron1777-account-delete-mh9jf"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.156204 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell02641-account-delete-zn9bc"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.194339 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapib12b-account-delete-rl98n"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.202280 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance0080-account-delete-fz4nc"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.204364 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapib12b-account-delete-rl98n"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.212724 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapib12b-account-delete-rl98n"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.257704 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6fhqb"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.261248 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c322124-103a-40d2-a429-f018076f88ff-operator-scripts\") pod \"neutron1777-account-delete-mh9jf\" (UID: \"8c322124-103a-40d2-a429-f018076f88ff\") " pod="openstack/neutron1777-account-delete-mh9jf"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.261321 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-operator-scripts\") pod \"novacell02641-account-delete-zn9bc\" (UID: \"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c\") " pod="openstack/novacell02641-account-delete-zn9bc"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.261353 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj8t4\" (UniqueName: \"kubernetes.io/projected/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-kube-api-access-nj8t4\") pod \"novacell02641-account-delete-zn9bc\" (UID: \"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c\") " pod="openstack/novacell02641-account-delete-zn9bc"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.261478 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsk7c\" (UniqueName: \"kubernetes.io/projected/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-kube-api-access-rsk7c\") pod \"cinder5ba5-account-delete-kd6ml\" (UID: \"ff7f1bf7-3734-4c0e-afc2-d841cc97a529\") " pod="openstack/cinder5ba5-account-delete-kd6ml"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.261557 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ab71b1-8ff6-488c-9401-9b63341b08dd-operator-scripts\") pod \"novaapib12b-account-delete-rl98n\" (UID: \"60ab71b1-8ff6-488c-9401-9b63341b08dd\") " pod="openstack/novaapib12b-account-delete-rl98n"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.261655 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgktq\" (UniqueName: \"kubernetes.io/projected/60ab71b1-8ff6-488c-9401-9b63341b08dd-kube-api-access-zgktq\") pod \"novaapib12b-account-delete-rl98n\" (UID: \"60ab71b1-8ff6-488c-9401-9b63341b08dd\") " pod="openstack/novaapib12b-account-delete-rl98n"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.261861 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf57v\" (UniqueName: \"kubernetes.io/projected/8c322124-103a-40d2-a429-f018076f88ff-kube-api-access-zf57v\") pod \"neutron1777-account-delete-mh9jf\" (UID: \"8c322124-103a-40d2-a429-f018076f88ff\") " pod="openstack/neutron1777-account-delete-mh9jf"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.261922 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-operator-scripts\") pod \"cinder5ba5-account-delete-kd6ml\" (UID: \"ff7f1bf7-3734-4c0e-afc2-d841cc97a529\") " pod="openstack/cinder5ba5-account-delete-kd6ml"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.263128 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c322124-103a-40d2-a429-f018076f88ff-operator-scripts\") pod \"neutron1777-account-delete-mh9jf\" (UID: \"8c322124-103a-40d2-a429-f018076f88ff\") " pod="openstack/neutron1777-account-delete-mh9jf"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.263173 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-operator-scripts\") pod \"cinder5ba5-account-delete-kd6ml\" (UID: \"ff7f1bf7-3734-4c0e-afc2-d841cc97a529\") " pod="openstack/cinder5ba5-account-delete-kd6ml"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.269064 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-b9s52"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.282174 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf57v\" (UniqueName: \"kubernetes.io/projected/8c322124-103a-40d2-a429-f018076f88ff-kube-api-access-zf57v\") pod \"neutron1777-account-delete-mh9jf\" (UID: \"8c322124-103a-40d2-a429-f018076f88ff\") " pod="openstack/neutron1777-account-delete-mh9jf"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.287570 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsk7c\" (UniqueName: \"kubernetes.io/projected/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-kube-api-access-rsk7c\") pod \"cinder5ba5-account-delete-kd6ml\" (UID: \"ff7f1bf7-3734-4c0e-afc2-d841cc97a529\") " pod="openstack/cinder5ba5-account-delete-kd6ml"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.294222 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6fhqb"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.329710 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-b9s52"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.363803 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgktq\" (UniqueName: \"kubernetes.io/projected/60ab71b1-8ff6-488c-9401-9b63341b08dd-kube-api-access-zgktq\") pod \"novaapib12b-account-delete-rl98n\" (UID: \"60ab71b1-8ff6-488c-9401-9b63341b08dd\") " pod="openstack/novaapib12b-account-delete-rl98n"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.363943 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-operator-scripts\") pod \"novacell02641-account-delete-zn9bc\" (UID: \"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c\") " pod="openstack/novacell02641-account-delete-zn9bc"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.363964 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj8t4\" (UniqueName: \"kubernetes.io/projected/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-kube-api-access-nj8t4\") pod \"novacell02641-account-delete-zn9bc\" (UID: \"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c\") " pod="openstack/novacell02641-account-delete-zn9bc"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.364021 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ab71b1-8ff6-488c-9401-9b63341b08dd-operator-scripts\") pod \"novaapib12b-account-delete-rl98n\" (UID: \"60ab71b1-8ff6-488c-9401-9b63341b08dd\") " pod="openstack/novaapib12b-account-delete-rl98n"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.366150 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-operator-scripts\") pod \"novacell02641-account-delete-zn9bc\" (UID: \"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c\") " pod="openstack/novacell02641-account-delete-zn9bc"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.370714 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ab71b1-8ff6-488c-9401-9b63341b08dd-operator-scripts\") pod \"novaapib12b-account-delete-rl98n\" (UID: \"60ab71b1-8ff6-488c-9401-9b63341b08dd\") " pod="openstack/novaapib12b-account-delete-rl98n"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.381608 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-g4flw"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.381867 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-g4flw" podUID="22b06684-2db9-4dca-aa15-53b22ca686d0" containerName="openstack-network-exporter" containerID="cri-o://19705f019e43eb0fec10afe7795a2b153f6e8f761831faa811bf6940dbd55294" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.393206 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-9cwqh"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.397086 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj8t4\" (UniqueName: \"kubernetes.io/projected/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-kube-api-access-nj8t4\") pod \"novacell02641-account-delete-zn9bc\" (UID: \"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c\") " pod="openstack/novacell02641-account-delete-zn9bc"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.397240 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgktq\" (UniqueName: \"kubernetes.io/projected/60ab71b1-8ff6-488c-9401-9b63341b08dd-kube-api-access-zgktq\") pod \"novaapib12b-account-delete-rl98n\" (UID: \"60ab71b1-8ff6-488c-9401-9b63341b08dd\") " pod="openstack/novaapib12b-account-delete-rl98n"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.398822 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder5ba5-account-delete-kd6ml"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.410682 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gn6td"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.474948 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron1777-account-delete-mh9jf"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.500522 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" podUID="7d0c0900-1e02-4dec-8e4c-a32f7f560a58" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.196:5353: connect: connection refused"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.518559 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell02641-account-delete-zn9bc"
Dec 02 14:07:21 crc kubenswrapper[4900]: E1202 14:07:21.519327 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93 is running failed: container process not found" containerID="d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93" cmd=["/usr/bin/pidof","ovsdb-server"]
Dec 02 14:07:21 crc kubenswrapper[4900]: E1202 14:07:21.526485 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93 is running failed: container process not found" containerID="d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93" cmd=["/usr/bin/pidof","ovsdb-server"]
Dec 02 14:07:21 crc kubenswrapper[4900]: E1202 14:07:21.526996 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93 is running failed: container process not found" containerID="d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93" cmd=["/usr/bin/pidof","ovsdb-server"]
Dec 02 14:07:21 crc kubenswrapper[4900]: E1202 14:07:21.527025 4900 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="096b1286-863b-44aa-ac7e-5cd509d99950" containerName="ovsdbserver-sb"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.538767 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapib12b-account-delete-rl98n"
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.571201 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6f6dffdfb8-h46pm"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.571938 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6f6dffdfb8-h46pm" podUID="7b0e50c7-752e-4879-a382-ff97500cfd89" containerName="placement-log" containerID="cri-o://72b6e3300d0787fe99949f1eade7bb409bf6f76d9bb245ec44e8976a1315be81" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.573117 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6f6dffdfb8-h46pm" podUID="7b0e50c7-752e-4879-a382-ff97500cfd89" containerName="placement-api" containerID="cri-o://6554b3a343d89d6f8889d9cc9f50c9bd71066e708a1b0d388e5faa67db8d54dc" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.689292 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.689924 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-server" containerID="cri-o://5e0242301bbd13a18a7ab682fc5ef7d58a6f6c86146abab5ab241882c022c72e" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.690379 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="swift-recon-cron" containerID="cri-o://6f12993e1fc195acb36a4222c9e80cc1d4aeaa566382dddf8b897df3ae681468" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.690449 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="rsync" containerID="cri-o://92b595b2d89b2be8cfc2216546011c9aad218c2d134cbf0d7dd2eeded97e32ae" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.690485 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-expirer" containerID="cri-o://650e07decb4d0921b10393aec4c8765f7b943d7fb39cad739dc92c08bc0cf83c" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.690529 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-updater" containerID="cri-o://5821e46042485c1373fa8cae7c61b288c7c4cea999d146348d992d1f1ebe01ae" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.690574 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-auditor" containerID="cri-o://b9903237aae7b30e4786154f26720bc4cccb8456c76a64b913e79db33e9723cc" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.690618 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-replicator" containerID="cri-o://25219f1dbf2d7a01dd6cfe25cfa91ecaaaabec4527f8896e9ed0b10b42b25db3" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.690669 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-server" containerID="cri-o://7f2a46fb8785892c4a865fae00d8ed6142ab75fae046b42634d84a99c5fcf69d" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.690704 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-updater" containerID="cri-o://8e6a67bb6f1294f115624e7162a130f3eabff83ef59d7b2a1a87dc5e03f7e6e7" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.690758 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-auditor" containerID="cri-o://535a4b01d9acc099e8e0cf36306f3d1613b8d40a0a2886c27a5e3adb4d22106c" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.690787 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-replicator" containerID="cri-o://304355d78e40f6ca3b22a607c420ecdb93fd14f1a0a1d10ee78e70aca9138742" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.691009 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-auditor" containerID="cri-o://38eff436fe11c5890e207833fe423224c1e521b3b82a519361fefcff2af660ad" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.691069 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-server" containerID="cri-o://bf6dbc2d90f268fe7fed54cad255fdedb06111980e4d028a6b734115fcd4bff2" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.691029 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-replicator" containerID="cri-o://46f15348813d8055006838bad9d40dbb909e9eabfc30521e1baeaf728552da63" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.690787 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-reaper" containerID="cri-o://10a9aaa8d1a2413e0ef899da8043a3d293c39ba29883684daac125f654e247c6" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.729395 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8jhpm"]
Dec 02 14:07:21 crc kubenswrapper[4900]: E1202 14:07:21.738051 4900 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Dec 02 14:07:21 crc kubenswrapper[4900]: E1202 14:07:21.738120 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data podName:8db82600-180c-4114-8006-551e1b566ce5 nodeName:}" failed. No retries permitted until 2025-12-02 14:07:23.738104389 +0000 UTC m=+1489.153918240 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data") pod "rabbitmq-cell1-server-0" (UID: "8db82600-180c-4114-8006-551e1b566ce5") : configmap "rabbitmq-cell1-config-data" not found
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.746690 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8jhpm"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.769761 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-fsfpg"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.806073 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-fsfpg"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.815158 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican4c1f-account-delete-ksk64"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.863056 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.863308 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1a302619-4a69-4e62-b7cb-6812b771f6d4" containerName="glance-log" containerID="cri-o://9ad89e0edddd80cb4770a29e75a6ba59954a7129d14ce51b8b71ef393689cab0" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.863826 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1a302619-4a69-4e62-b7cb-6812b771f6d4" containerName="glance-httpd" containerID="cri-o://7eb9fd41a54b8a69b7cc6d75b1be55aec13e8d931357769f99ce0bad86542d63" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.878220 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.896534 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.897323 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c49f994e-0d9a-4312-995d-d84d93f31f01" containerName="cinder-scheduler" containerID="cri-o://7e2a5e62fdfd6d58e261c3a59e316fbb35eb6ecbeecd1f0c5b424484dcb2b5d5" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.897488 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c49f994e-0d9a-4312-995d-d84d93f31f01" containerName="probe" containerID="cri-o://75b061719509895544c5101526474e3593e712a83ea8bf19f5d39b1e05838e7d" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.915507 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d9bd66cf-nlpm2"]
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.915855 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5d9bd66cf-nlpm2" podUID="7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" containerName="neutron-api" containerID="cri-o://dbab360a13373b0f107811e32ac3f4e9da16fc04fec31450a8afa527c07e139b" gracePeriod=30
Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.917926 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5d9bd66cf-nlpm2"
podUID="7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" containerName="neutron-httpd" containerID="cri-o://ff24395ff17544005ed3b0c813dfed8d4179e1e8e38687a4303ee6b98024dcbd" gracePeriod=30 Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.994578 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.994990 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9e062e50-5a22-45c0-adab-9f78980eb851" containerName="glance-log" containerID="cri-o://8b057749e05312ac8b867b23be07aedbc2a70bbb08e3cd32bd27a1b2582ac140" gracePeriod=30 Dec 02 14:07:21 crc kubenswrapper[4900]: I1202 14:07:21.995620 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9e062e50-5a22-45c0-adab-9f78980eb851" containerName="glance-httpd" containerID="cri-o://974bb345b96f61cacf2fe7abb81edc513dd4982f9f5eaf42cece693b6d995322" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.018817 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.019161 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="557a84eb-0882-44c1-b4db-7c8a19e1303d" containerName="cinder-api-log" containerID="cri-o://780db50c9c1ada3d5ce136afa95e168fc995789ee6f6731c4c9529970d7dfd6e" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.019693 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="557a84eb-0882-44c1-b4db-7c8a19e1303d" containerName="cinder-api" containerID="cri-o://440f785e6ac340819ae403625dc734fd43ed1abbd0b52db9080939d07419abce" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.046895 4900 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.046956 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data podName:e410de46-b373-431a-8486-21a6f1268e41 nodeName:}" failed. No retries permitted until 2025-12-02 14:07:24.046942585 +0000 UTC m=+1489.462756436 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data") pod "rabbitmq-server-0" (UID: "e410de46-b373-431a-8486-21a6f1268e41") : configmap "rabbitmq-config-data" not found Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.074655 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="8db82600-180c-4114-8006-551e1b566ce5" containerName="rabbitmq" containerID="cri-o://c9b48d55f32d54ed9f77fab0b281d7e2bb2a1783f7388f1bec82ef0b685bf983" gracePeriod=604800 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.110157 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79db7cb55d-4cs7x"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.110401 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79db7cb55d-4cs7x" podUID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerName="barbican-api-log" containerID="cri-o://4735b662992f5922ce1cad03f6fa4def9946bffac92b4623b987fa581665c1db" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.110827 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79db7cb55d-4cs7x" podUID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerName="barbican-api" containerID="cri-o://32bcc73f011b1518d11ed9404adb699f4c8c5a3bbe633bebd10a6b2b3117dd08" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.128394 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c353c599-462c-4196-a35c-7622350bb349/ovsdbserver-nb/0.log" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.128437 4900 generic.go:334] "Generic (PLEG): container finished" podID="c353c599-462c-4196-a35c-7622350bb349" containerID="a10158b3879ca4655fc8e6391c12e72686bdf7aae27551ad7cd381abaa366312" exitCode=2 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.128451 4900 generic.go:334] "Generic (PLEG): container finished" podID="c353c599-462c-4196-a35c-7622350bb349" containerID="757b9daa69d51a10f6b4b9ded6c9bdd6924c1d21fda0bc963aa398285554bfc0" exitCode=143 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.128492 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c353c599-462c-4196-a35c-7622350bb349","Type":"ContainerDied","Data":"a10158b3879ca4655fc8e6391c12e72686bdf7aae27551ad7cd381abaa366312"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.128517 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c353c599-462c-4196-a35c-7622350bb349","Type":"ContainerDied","Data":"757b9daa69d51a10f6b4b9ded6c9bdd6924c1d21fda0bc963aa398285554bfc0"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.135625 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-ff8b8d959-29bd8"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.135854 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-ff8b8d959-29bd8" podUID="cb2b5602-0b26-4de1-ac2c-3606bd0aede3" containerName="barbican-worker-log" containerID="cri-o://58ddf684d77381c4c34646d9e1659713b257f24bcc73342818f13e7ed7d7268f" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.136218 4900 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-worker-ff8b8d959-29bd8" podUID="cb2b5602-0b26-4de1-ac2c-3606bd0aede3" containerName="barbican-worker" containerID="cri-o://e987a3eb684f39a5280336cf0f24e6d52b9537943b4ae3c78e42d02eaacbba03" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.156014 4900 generic.go:334] "Generic (PLEG): container finished" podID="7b0e50c7-752e-4879-a382-ff97500cfd89" containerID="72b6e3300d0787fe99949f1eade7bb409bf6f76d9bb245ec44e8976a1315be81" exitCode=143 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.156084 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f6dffdfb8-h46pm" event={"ID":"7b0e50c7-752e-4879-a382-ff97500cfd89","Type":"ContainerDied","Data":"72b6e3300d0787fe99949f1eade7bb409bf6f76d9bb245ec44e8976a1315be81"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.186426 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5c79b4474d-mx7p9"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.186696 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" podUID="241c5e6f-d993-4c7a-90a2-1ae1786dbea2" containerName="barbican-keystone-listener-log" containerID="cri-o://9b9a1adc9bf6bc055b6124dbfb0cf0940f73cceff1fb98ba82f90ebb4fa7c9e3" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.187222 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" podUID="241c5e6f-d993-4c7a-90a2-1ae1786dbea2" containerName="barbican-keystone-listener" containerID="cri-o://683ad46b79d8da86e3dc06d5fc634651aa5b590466fe3ab013890ca87d56975d" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.209349 4900 generic.go:334] "Generic (PLEG): container finished" podID="1a302619-4a69-4e62-b7cb-6812b771f6d4" containerID="9ad89e0edddd80cb4770a29e75a6ba59954a7129d14ce51b8b71ef393689cab0" exitCode=143 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.209602 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a302619-4a69-4e62-b7cb-6812b771f6d4","Type":"ContainerDied","Data":"9ad89e0edddd80cb4770a29e75a6ba59954a7129d14ce51b8b71ef393689cab0"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.217541 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.241444 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican4c1f-account-delete-ksk64" event={"ID":"b27a1401-3ad1-40a3-9ce6-08cac86fef42","Type":"ContainerStarted","Data":"7eeab5cd14274b5f1ba5f97ff7534d790ea20e4f0ea3be993b72b16c6a0c6052"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.253215 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.253437 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5624f474-dd54-4580-b816-f238cc733b5a" containerName="nova-metadata-log" containerID="cri-o://23b3303be4e59abe74fc9f9832f7a048be4602b7bb3f5d0f5af2d708139fa0ab" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.253858 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5624f474-dd54-4580-b816-f238cc733b5a" 
containerName="nova-metadata-metadata" containerID="cri-o://111a5b135a319c9f662db3b3e6a11bfecf64162e38bdb1b58f02ab519892e209" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.261386 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.272936 4900 generic.go:334] "Generic (PLEG): container finished" podID="9c17cf84-2174-42d8-880a-9a643a161ef4" containerID="21b9a43c02558258bc5549999999ff72f00f4644cbc3a254387cc0fa7154a8d5" exitCode=2 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.273041 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9c17cf84-2174-42d8-880a-9a643a161ef4","Type":"ContainerDied","Data":"21b9a43c02558258bc5549999999ff72f00f4644cbc3a254387cc0fa7154a8d5"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.276053 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g4flw_22b06684-2db9-4dca-aa15-53b22ca686d0/openstack-network-exporter/0.log" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.276097 4900 generic.go:334] "Generic (PLEG): container finished" podID="22b06684-2db9-4dca-aa15-53b22ca686d0" containerID="19705f019e43eb0fec10afe7795a2b153f6e8f761831faa811bf6940dbd55294" exitCode=2 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.276142 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g4flw" event={"ID":"22b06684-2db9-4dca-aa15-53b22ca686d0","Type":"ContainerDied","Data":"19705f019e43eb0fec10afe7795a2b153f6e8f761831faa811bf6940dbd55294"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.276534 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.276747 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerName="nova-api-log" containerID="cri-o://81cc101728f5de7b2d97ff757885b53c0f26120c2800d3265c49f79eefd4fe58" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.277144 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerName="nova-api-api" containerID="cri-o://863da9d070c65e09dd0916cf6ca8bf7a03e08fb91cbe309cd740bc4a5f3a11aa" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.288100 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_096b1286-863b-44aa-ac7e-5cd509d99950/ovsdbserver-sb/0.log" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.288142 4900 generic.go:334] "Generic (PLEG): container finished" podID="096b1286-863b-44aa-ac7e-5cd509d99950" containerID="285f3512fc061d05a5061746933aff39280feb7e3c81097b9fc9c0b9cf0d32da" exitCode=2 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.288155 4900 generic.go:334] "Generic (PLEG): container finished" podID="096b1286-863b-44aa-ac7e-5cd509d99950" containerID="d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93" exitCode=143 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.288233 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"096b1286-863b-44aa-ac7e-5cd509d99950","Type":"ContainerDied","Data":"285f3512fc061d05a5061746933aff39280feb7e3c81097b9fc9c0b9cf0d32da"} Dec 02 14:07:22 crc 
kubenswrapper[4900]: I1202 14:07:22.288256 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"096b1286-863b-44aa-ac7e-5cd509d99950","Type":"ContainerDied","Data":"d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.292707 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-2n7dx"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.298748 4900 generic.go:334] "Generic (PLEG): container finished" podID="6ff6dcaf-b619-4169-9b36-81ee92264d71" containerID="6c10ca3ad8fceefa6060a64015e329b0a3d68e6291a95453dad98596043ab0ef" exitCode=137 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.308919 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-2n7dx"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.315896 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ccbf-account-create-update-z9clm"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.331429 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ccbf-account-create-update-z9clm"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.341230 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.341522 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ad78a256-27f0-46a9-addb-dbc7b41bebd2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9d4da9c7aa6120d5ccd058c5a050090e9130cfb769ef31b34092dd5d53ce8475" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.364743 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementa999-account-delete-25gdq"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.367454 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e410de46-b373-431a-8486-21a6f1268e41" containerName="rabbitmq" containerID="cri-o://983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403" gracePeriod=604800 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.373029 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.373269 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ed8121d7-7b10-44c5-9b43-9088b198f34c" containerName="nova-scheduler-scheduler" containerID="cri-o://88407bc6bd8e2dfedf02e3b155acca5b6726043dcabd9f0d892d020af43e5c9f" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.385758 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z7hzq"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.390486 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-z7hzq"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.396797 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovs-vswitchd" containerID="cri-o://2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" gracePeriod=29 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.398723 4900 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.398976 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="784ffd24-69a7-4235-9d4d-4a1be6f183fd" containerName="nova-cell1-conductor-conductor" containerID="cri-o://510d10432ff195659ecc944eebf232f1acb2bf5b53e5bcf0ad3e9a2ab2d1a6fb" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: W1202 14:07:22.400721 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d121fee_d98a_4dcd_ba07_1d4b2015460d.slice/crio-b42c2792f703fe7c00954295ae0cb411363acfaab7617c86614e6e1e9c9c4125 WatchSource:0}: Error finding container b42c2792f703fe7c00954295ae0cb411363acfaab7617c86614e6e1e9c9c4125: Status 404 returned error can't find the container with id b42c2792f703fe7c00954295ae0cb411363acfaab7617c86614e6e1e9c9c4125 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.400849 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="650e07decb4d0921b10393aec4c8765f7b943d7fb39cad739dc92c08bc0cf83c" exitCode=0 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.400886 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="5821e46042485c1373fa8cae7c61b288c7c4cea999d146348d992d1f1ebe01ae" exitCode=0 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.400897 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="b9903237aae7b30e4786154f26720bc4cccb8456c76a64b913e79db33e9723cc" exitCode=0 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.400907 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="25219f1dbf2d7a01dd6cfe25cfa91ecaaaabec4527f8896e9ed0b10b42b25db3" exitCode=0 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.400918 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="8e6a67bb6f1294f115624e7162a130f3eabff83ef59d7b2a1a87dc5e03f7e6e7" exitCode=0 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.400928 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="38eff436fe11c5890e207833fe423224c1e521b3b82a519361fefcff2af660ad" exitCode=0 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.400938 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="46f15348813d8055006838bad9d40dbb909e9eabfc30521e1baeaf728552da63" exitCode=0 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.400948 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="10a9aaa8d1a2413e0ef899da8043a3d293c39ba29883684daac125f654e247c6" exitCode=0 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.400957 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="535a4b01d9acc099e8e0cf36306f3d1613b8d40a0a2886c27a5e3adb4d22106c" exitCode=0 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.400964 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="304355d78e40f6ca3b22a607c420ecdb93fd14f1a0a1d10ee78e70aca9138742" 
exitCode=0 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.401039 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"650e07decb4d0921b10393aec4c8765f7b943d7fb39cad739dc92c08bc0cf83c"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.401072 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"5821e46042485c1373fa8cae7c61b288c7c4cea999d146348d992d1f1ebe01ae"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.401087 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"b9903237aae7b30e4786154f26720bc4cccb8456c76a64b913e79db33e9723cc"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.401099 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"25219f1dbf2d7a01dd6cfe25cfa91ecaaaabec4527f8896e9ed0b10b42b25db3"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.401112 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"8e6a67bb6f1294f115624e7162a130f3eabff83ef59d7b2a1a87dc5e03f7e6e7"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.401125 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"38eff436fe11c5890e207833fe423224c1e521b3b82a519361fefcff2af660ad"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.401136 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"46f15348813d8055006838bad9d40dbb909e9eabfc30521e1baeaf728552da63"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.401146 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"10a9aaa8d1a2413e0ef899da8043a3d293c39ba29883684daac125f654e247c6"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.401157 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"535a4b01d9acc099e8e0cf36306f3d1613b8d40a0a2886c27a5e3adb4d22106c"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.401170 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"304355d78e40f6ca3b22a607c420ecdb93fd14f1a0a1d10ee78e70aca9138742"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.407732 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bj75w"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.415900 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.416089 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="bf4fd62f-751c-4ba7-8582-3d953bdc0bf6" 
containerName="nova-cell0-conductor-conductor" containerID="cri-o://7a015b969677f8a38ffbf9b4e7f89474014d3449b484894c6c2a8469cb1a3e61" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.417869 4900 generic.go:334] "Generic (PLEG): container finished" podID="7d0c0900-1e02-4dec-8e4c-a32f7f560a58" containerID="ba3a70bf8272e8aca22d682efbb037a31975cc819affc7f24193c35cc53da8ce" exitCode=0 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.417907 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" event={"ID":"7d0c0900-1e02-4dec-8e4c-a32f7f560a58","Type":"ContainerDied","Data":"ba3a70bf8272e8aca22d682efbb037a31975cc819affc7f24193c35cc53da8ce"} Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.421973 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bj75w"] Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.485161 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc353c599_462c_4196_a35c_7622350bb349.slice/crio-conmon-757b9daa69d51a10f6b4b9ded6c9bdd6924c1d21fda0bc963aa398285554bfc0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22b06684_2db9_4dca_aa15_53b22ca686d0.slice/crio-conmon-19705f019e43eb0fec10afe7795a2b153f6e8f761831faa811bf6940dbd55294.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a302619_4a69_4e62_b7cb_6812b771f6d4.slice/crio-conmon-9ad89e0edddd80cb4770a29e75a6ba59954a7129d14ce51b8b71ef393689cab0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305da939_8e7b_4fce_95f9_95d90218a1f0.slice/crio-25219f1dbf2d7a01dd6cfe25cfa91ecaaaabec4527f8896e9ed0b10b42b25db3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305da939_8e7b_4fce_95f9_95d90218a1f0.slice/crio-bf6dbc2d90f268fe7fed54cad255fdedb06111980e4d028a6b734115fcd4bff2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305da939_8e7b_4fce_95f9_95d90218a1f0.slice/crio-conmon-5821e46042485c1373fa8cae7c61b288c7c4cea999d146348d992d1f1ebe01ae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305da939_8e7b_4fce_95f9_95d90218a1f0.slice/crio-5e0242301bbd13a18a7ab682fc5ef7d58a6f6c86146abab5ab241882c022c72e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305da939_8e7b_4fce_95f9_95d90218a1f0.slice/crio-conmon-650e07decb4d0921b10393aec4c8765f7b943d7fb39cad739dc92c08bc0cf83c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dc80fa7_9bd3_4ed5_81d3_dfb8caa08571.slice/crio-conmon-4735b662992f5922ce1cad03f6fa4def9946bffac92b4623b987fa581665c1db.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod096b1286_863b_44aa_ac7e_5cd509d99950.slice/crio-conmon-d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305da939_8e7b_4fce_95f9_95d90218a1f0.slice/crio-8e6a67bb6f1294f115624e7162a130f3eabff83ef59d7b2a1a87dc5e03f7e6e7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb85e71bb_2c08_4821_adf8_5ab6786c5c9b.slice/crio-conmon-bedbc200d19f936535d94fd6432a16a677ed86493cdb3389497b79552338f7db.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305da939_8e7b_4fce_95f9_95d90218a1f0.slice/crio-650e07decb4d0921b10393aec4c8765f7b943d7fb39cad739dc92c08bc0cf83c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e062e50_5a22_45c0_adab_9f78980eb851.slice/crio-conmon-8b057749e05312ac8b867b23be07aedbc2a70bbb08e3cd32bd27a1b2582ac140.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ff6dcaf_b619_4169_9b36_81ee92264d71.slice/crio-6c10ca3ad8fceefa6060a64015e329b0a3d68e6291a95453dad98596043ab0ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod241c5e6f_d993_4c7a_90a2_1ae1786dbea2.slice/crio-9b9a1adc9bf6bc055b6124dbfb0cf0940f73cceff1fb98ba82f90ebb4fa7c9e3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8a322a0_752b_4ab1_9418_41c4747eebee.slice/crio-81cc101728f5de7b2d97ff757885b53c0f26120c2800d3265c49f79eefd4fe58.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod557a84eb_0882_44c1_b4db_7c8a19e1303d.slice/crio-780db50c9c1ada3d5ce136afa95e168fc995789ee6f6731c4c9529970d7dfd6e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305da939_8e7b_4fce_95f9_95d90218a1f0.slice/crio-conmon-38eff436fe11c5890e207833fe423224c1e521b3b82a519361fefcff2af660ad.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305da939_8e7b_4fce_95f9_95d90218a1f0.slice/crio-7f2a46fb8785892c4a865fae00d8ed6142ab75fae046b42634d84a99c5fcf69d.scope\": RecentStats: unable to find data in memory cache]" Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.641358 4900 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Dec 02 14:07:22 crc kubenswrapper[4900]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 02 14:07:22 crc kubenswrapper[4900]: + source /usr/local/bin/container-scripts/functions Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNBridge=br-int Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNRemote=tcp:localhost:6642 Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNEncapType=geneve Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNAvailabilityZones= Dec 02 14:07:22 crc kubenswrapper[4900]: ++ EnableChassisAsGateway=true Dec 02 14:07:22 crc kubenswrapper[4900]: ++ PhysicalNetworks= Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNHostName= Dec 02 14:07:22 crc kubenswrapper[4900]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 02 14:07:22 crc kubenswrapper[4900]: ++ ovs_dir=/var/lib/openvswitch Dec 02 14:07:22 crc kubenswrapper[4900]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script 
Dec 02 14:07:22 crc kubenswrapper[4900]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 02 14:07:22 crc kubenswrapper[4900]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 02 14:07:22 crc kubenswrapper[4900]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 02 14:07:22 crc kubenswrapper[4900]: + sleep 0.5 Dec 02 14:07:22 crc kubenswrapper[4900]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 02 14:07:22 crc kubenswrapper[4900]: + sleep 0.5 Dec 02 14:07:22 crc kubenswrapper[4900]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 02 14:07:22 crc kubenswrapper[4900]: + cleanup_ovsdb_server_semaphore Dec 02 14:07:22 crc kubenswrapper[4900]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 02 14:07:22 crc kubenswrapper[4900]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 02 14:07:22 crc kubenswrapper[4900]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-9cwqh" message=< Dec 02 14:07:22 crc kubenswrapper[4900]: Exiting ovsdb-server (5) [ OK ] Dec 02 14:07:22 crc kubenswrapper[4900]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 02 14:07:22 crc kubenswrapper[4900]: + source /usr/local/bin/container-scripts/functions Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNBridge=br-int Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNRemote=tcp:localhost:6642 Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNEncapType=geneve Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNAvailabilityZones= Dec 02 14:07:22 crc kubenswrapper[4900]: ++ EnableChassisAsGateway=true Dec 02 14:07:22 crc kubenswrapper[4900]: ++ PhysicalNetworks= Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNHostName= Dec 02 14:07:22 crc kubenswrapper[4900]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 02 14:07:22 crc kubenswrapper[4900]: ++ ovs_dir=/var/lib/openvswitch Dec 02 14:07:22 crc kubenswrapper[4900]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 02 14:07:22 crc kubenswrapper[4900]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 02 14:07:22 crc kubenswrapper[4900]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 02 14:07:22 crc kubenswrapper[4900]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 02 14:07:22 crc kubenswrapper[4900]: + sleep 0.5 Dec 02 14:07:22 crc kubenswrapper[4900]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 02 14:07:22 crc kubenswrapper[4900]: + sleep 0.5 Dec 02 14:07:22 crc kubenswrapper[4900]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 02 14:07:22 crc kubenswrapper[4900]: + cleanup_ovsdb_server_semaphore Dec 02 14:07:22 crc kubenswrapper[4900]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 02 14:07:22 crc kubenswrapper[4900]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 02 14:07:22 crc kubenswrapper[4900]: > Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.641851 4900 kuberuntime_container.go:691] "PreStop hook failed" err=< Dec 02 14:07:22 crc kubenswrapper[4900]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 02 14:07:22 crc kubenswrapper[4900]: + source /usr/local/bin/container-scripts/functions Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNBridge=br-int Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNRemote=tcp:localhost:6642 Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNEncapType=geneve Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNAvailabilityZones= Dec 02 14:07:22 crc kubenswrapper[4900]: ++ EnableChassisAsGateway=true Dec 02 14:07:22 crc kubenswrapper[4900]: ++ PhysicalNetworks= Dec 02 14:07:22 crc kubenswrapper[4900]: ++ OVNHostName= Dec 02 14:07:22 crc kubenswrapper[4900]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 02 14:07:22 crc kubenswrapper[4900]: ++ ovs_dir=/var/lib/openvswitch Dec 02 14:07:22 crc kubenswrapper[4900]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 02 14:07:22 crc kubenswrapper[4900]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 02 14:07:22 crc kubenswrapper[4900]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 02 14:07:22 crc kubenswrapper[4900]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 02 14:07:22 crc kubenswrapper[4900]: + sleep 0.5 Dec 02 14:07:22 crc kubenswrapper[4900]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 02 14:07:22 crc kubenswrapper[4900]: + sleep 0.5 Dec 02 14:07:22 crc kubenswrapper[4900]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 02 14:07:22 crc kubenswrapper[4900]: + cleanup_ovsdb_server_semaphore Dec 02 14:07:22 crc kubenswrapper[4900]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 02 14:07:22 crc kubenswrapper[4900]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 02 14:07:22 crc kubenswrapper[4900]: > pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server" containerID="cri-o://231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.641917 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server" containerID="cri-o://231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" gracePeriod=29 Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.646468 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="ab69f1a2-78df-4097-a527-0b90345cdcfe" containerName="galera" containerID="cri-o://f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9" gracePeriod=30 Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.693471 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.696133 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.718468 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.723937 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.723976 4900 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" 
containerName="ovsdb-server" Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.754861 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.774635 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:22 crc kubenswrapper[4900]: E1202 14:07:22.774708 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovs-vswitchd" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.872177 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance0080-account-delete-fz4nc"] Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.947325 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28dddd71-8010-481f-873c-b50f112e39ef" path="/var/lib/kubelet/pods/28dddd71-8010-481f-873c-b50f112e39ef/volumes" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.947887 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db11313-933b-4905-acd9-47c95d3014eb" path="/var/lib/kubelet/pods/3db11313-933b-4905-acd9-47c95d3014eb/volumes" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.948388 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51675b3c-124f-44aa-b629-c771287652ef" path="/var/lib/kubelet/pods/51675b3c-124f-44aa-b629-c771287652ef/volumes" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.949113 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62727648-546e-4e0e-9786-75f8bcd2e332" path="/var/lib/kubelet/pods/62727648-546e-4e0e-9786-75f8bcd2e332/volumes" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.950100 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6689f553-a564-4dfa-982a-bedd8787e343" path="/var/lib/kubelet/pods/6689f553-a564-4dfa-982a-bedd8787e343/volumes" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.950619 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f87d7bb-973b-441c-9bb7-18a6e9532691" path="/var/lib/kubelet/pods/9f87d7bb-973b-441c-9bb7-18a6e9532691/volumes" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.951116 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d28b512a-2406-4ad9-a594-7d408b8d3fb6" path="/var/lib/kubelet/pods/d28b512a-2406-4ad9-a594-7d408b8d3fb6/volumes" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.953277 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3cec85c-cdfd-4f4c-bae9-629ab14653a4" path="/var/lib/kubelet/pods/d3cec85c-cdfd-4f4c-bae9-629ab14653a4/volumes" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.954706 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea1df1b2-175e-4695-a514-0378d69d38f9" 
path="/var/lib/kubelet/pods/ea1df1b2-175e-4695-a514-0378d69d38f9/volumes" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.955185 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g4flw_22b06684-2db9-4dca-aa15-53b22ca686d0/openstack-network-exporter/0.log" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.955245 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.958985 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd24b5dd-8bba-467d-977a-cbd11c05e52b" path="/var/lib/kubelet/pods/fd24b5dd-8bba-467d-977a-cbd11c05e52b/volumes" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.988081 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 14:07:22 crc kubenswrapper[4900]: I1202 14:07:22.998606 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.093577 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-config\") pod \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.094982 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config-secret\") pod \"6ff6dcaf-b619-4169-9b36-81ee92264d71\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095012 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-sb\") pod \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095035 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-svc\") pod \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095118 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b06684-2db9-4dca-aa15-53b22ca686d0-config\") pod \"22b06684-2db9-4dca-aa15-53b22ca686d0\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095191 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s79pk\" (UniqueName: \"kubernetes.io/projected/22b06684-2db9-4dca-aa15-53b22ca686d0-kube-api-access-s79pk\") pod \"22b06684-2db9-4dca-aa15-53b22ca686d0\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095231 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovs-rundir\") pod \"22b06684-2db9-4dca-aa15-53b22ca686d0\" (UID: 
\"22b06684-2db9-4dca-aa15-53b22ca686d0\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095252 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-nb\") pod \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095271 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovn-rundir\") pod \"22b06684-2db9-4dca-aa15-53b22ca686d0\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095311 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-swift-storage-0\") pod \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095344 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-combined-ca-bundle\") pod \"22b06684-2db9-4dca-aa15-53b22ca686d0\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095380 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config\") pod \"6ff6dcaf-b619-4169-9b36-81ee92264d71\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095426 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85zfn\" (UniqueName: \"kubernetes.io/projected/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-kube-api-access-85zfn\") pod \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\" (UID: \"7d0c0900-1e02-4dec-8e4c-a32f7f560a58\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095496 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xc8r\" (UniqueName: \"kubernetes.io/projected/6ff6dcaf-b619-4169-9b36-81ee92264d71-kube-api-access-7xc8r\") pod \"6ff6dcaf-b619-4169-9b36-81ee92264d71\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095527 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-metrics-certs-tls-certs\") pod \"22b06684-2db9-4dca-aa15-53b22ca686d0\" (UID: \"22b06684-2db9-4dca-aa15-53b22ca686d0\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.095591 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-combined-ca-bundle\") pod \"6ff6dcaf-b619-4169-9b36-81ee92264d71\" (UID: \"6ff6dcaf-b619-4169-9b36-81ee92264d71\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.096332 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod 
"22b06684-2db9-4dca-aa15-53b22ca686d0" (UID: "22b06684-2db9-4dca-aa15-53b22ca686d0"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.097493 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "22b06684-2db9-4dca-aa15-53b22ca686d0" (UID: "22b06684-2db9-4dca-aa15-53b22ca686d0"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.098071 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b06684-2db9-4dca-aa15-53b22ca686d0-config" (OuterVolumeSpecName: "config") pod "22b06684-2db9-4dca-aa15-53b22ca686d0" (UID: "22b06684-2db9-4dca-aa15-53b22ca686d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.108067 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_096b1286-863b-44aa-ac7e-5cd509d99950/ovsdbserver-sb/0.log" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.108193 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.122045 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-kube-api-access-85zfn" (OuterVolumeSpecName: "kube-api-access-85zfn") pod "7d0c0900-1e02-4dec-8e4c-a32f7f560a58" (UID: "7d0c0900-1e02-4dec-8e4c-a32f7f560a58"). InnerVolumeSpecName "kube-api-access-85zfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.122614 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b06684-2db9-4dca-aa15-53b22ca686d0-kube-api-access-s79pk" (OuterVolumeSpecName: "kube-api-access-s79pk") pod "22b06684-2db9-4dca-aa15-53b22ca686d0" (UID: "22b06684-2db9-4dca-aa15-53b22ca686d0"). InnerVolumeSpecName "kube-api-access-s79pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.126340 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff6dcaf-b619-4169-9b36-81ee92264d71-kube-api-access-7xc8r" (OuterVolumeSpecName: "kube-api-access-7xc8r") pod "6ff6dcaf-b619-4169-9b36-81ee92264d71" (UID: "6ff6dcaf-b619-4169-9b36-81ee92264d71"). InnerVolumeSpecName "kube-api-access-7xc8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.198307 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lsjk\" (UniqueName: \"kubernetes.io/projected/096b1286-863b-44aa-ac7e-5cd509d99950-kube-api-access-5lsjk\") pod \"096b1286-863b-44aa-ac7e-5cd509d99950\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.198389 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-config\") pod \"096b1286-863b-44aa-ac7e-5cd509d99950\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.198472 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"096b1286-863b-44aa-ac7e-5cd509d99950\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.198634 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-combined-ca-bundle\") pod \"096b1286-863b-44aa-ac7e-5cd509d99950\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.198802 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdb-rundir\") pod \"096b1286-863b-44aa-ac7e-5cd509d99950\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.198834 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-metrics-certs-tls-certs\") pod \"096b1286-863b-44aa-ac7e-5cd509d99950\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.198885 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdbserver-sb-tls-certs\") pod \"096b1286-863b-44aa-ac7e-5cd509d99950\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.198921 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-scripts\") pod \"096b1286-863b-44aa-ac7e-5cd509d99950\" (UID: \"096b1286-863b-44aa-ac7e-5cd509d99950\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.199239 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-config" (OuterVolumeSpecName: "config") pod "096b1286-863b-44aa-ac7e-5cd509d99950" (UID: "096b1286-863b-44aa-ac7e-5cd509d99950"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.200744 4900 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovs-rundir\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.200763 4900 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/22b06684-2db9-4dca-aa15-53b22ca686d0-ovn-rundir\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.200773 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85zfn\" (UniqueName: \"kubernetes.io/projected/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-kube-api-access-85zfn\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.200784 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xc8r\" (UniqueName: \"kubernetes.io/projected/6ff6dcaf-b619-4169-9b36-81ee92264d71-kube-api-access-7xc8r\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.200794 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.200803 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b06684-2db9-4dca-aa15-53b22ca686d0-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.200815 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s79pk\" (UniqueName: \"kubernetes.io/projected/22b06684-2db9-4dca-aa15-53b22ca686d0-kube-api-access-s79pk\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.202056 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-scripts" (OuterVolumeSpecName: "scripts") pod "096b1286-863b-44aa-ac7e-5cd509d99950" (UID: "096b1286-863b-44aa-ac7e-5cd509d99950"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.202336 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "096b1286-863b-44aa-ac7e-5cd509d99950" (UID: "096b1286-863b-44aa-ac7e-5cd509d99950"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.207596 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c353c599-462c-4196-a35c-7622350bb349/ovsdbserver-nb/0.log" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.207752 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.213961 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder5ba5-account-delete-kd6ml"] Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.237326 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron1777-account-delete-mh9jf"] Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.240521 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096b1286-863b-44aa-ac7e-5cd509d99950-kube-api-access-5lsjk" (OuterVolumeSpecName: "kube-api-access-5lsjk") pod "096b1286-863b-44aa-ac7e-5cd509d99950" (UID: "096b1286-863b-44aa-ac7e-5cd509d99950"). InnerVolumeSpecName "kube-api-access-5lsjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.241021 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "096b1286-863b-44aa-ac7e-5cd509d99950" (UID: "096b1286-863b-44aa-ac7e-5cd509d99950"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: W1202 14:07:23.255192 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff7f1bf7_3734_4c0e_afc2_d841cc97a529.slice/crio-aa62552ca88d9f76b70cc375eeb3f6280df5712e210d444d514bc2858fb9468c WatchSource:0}: Error finding container aa62552ca88d9f76b70cc375eeb3f6280df5712e210d444d514bc2858fb9468c: Status 404 returned error can't find the container with id aa62552ca88d9f76b70cc375eeb3f6280df5712e210d444d514bc2858fb9468c Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.284119 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapib12b-account-delete-rl98n"] Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.301812 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c353c599-462c-4196-a35c-7622350bb349-ovsdb-rundir\") pod \"c353c599-462c-4196-a35c-7622350bb349\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302014 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8r5q\" (UniqueName: \"kubernetes.io/projected/c353c599-462c-4196-a35c-7622350bb349-kube-api-access-j8r5q\") pod \"c353c599-462c-4196-a35c-7622350bb349\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302068 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-ovsdbserver-nb-tls-certs\") pod \"c353c599-462c-4196-a35c-7622350bb349\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302087 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-metrics-certs-tls-certs\") pod \"c353c599-462c-4196-a35c-7622350bb349\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302117 4900 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"c353c599-462c-4196-a35c-7622350bb349\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302238 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-combined-ca-bundle\") pod \"c353c599-462c-4196-a35c-7622350bb349\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302278 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-config\") pod \"c353c599-462c-4196-a35c-7622350bb349\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302305 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-scripts\") pod \"c353c599-462c-4196-a35c-7622350bb349\" (UID: \"c353c599-462c-4196-a35c-7622350bb349\") " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302628 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c353c599-462c-4196-a35c-7622350bb349-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c353c599-462c-4196-a35c-7622350bb349" (UID: "c353c599-462c-4196-a35c-7622350bb349"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302729 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lsjk\" (UniqueName: \"kubernetes.io/projected/096b1286-863b-44aa-ac7e-5cd509d99950-kube-api-access-5lsjk\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302755 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302765 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.302774 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096b1286-863b-44aa-ac7e-5cd509d99950-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.303383 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-config" (OuterVolumeSpecName: "config") pod "c353c599-462c-4196-a35c-7622350bb349" (UID: "c353c599-462c-4196-a35c-7622350bb349"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.303766 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-scripts" (OuterVolumeSpecName: "scripts") pod "c353c599-462c-4196-a35c-7622350bb349" (UID: "c353c599-462c-4196-a35c-7622350bb349"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.309394 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell02641-account-delete-zn9bc"] Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.331477 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "c353c599-462c-4196-a35c-7622350bb349" (UID: "c353c599-462c-4196-a35c-7622350bb349"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.336752 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c353c599-462c-4196-a35c-7622350bb349-kube-api-access-j8r5q" (OuterVolumeSpecName: "kube-api-access-j8r5q") pod "c353c599-462c-4196-a35c-7622350bb349" (UID: "c353c599-462c-4196-a35c-7622350bb349"). InnerVolumeSpecName "kube-api-access-j8r5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: W1202 14:07:23.339531 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60ab71b1_8ff6_488c_9401_9b63341b08dd.slice/crio-7baca948b0da00c1542ff63bc396fc96cd03852806ba07d7e41562963c5ba080 WatchSource:0}: Error finding container 7baca948b0da00c1542ff63bc396fc96cd03852806ba07d7e41562963c5ba080: Status 404 returned error can't find the container with id 7baca948b0da00c1542ff63bc396fc96cd03852806ba07d7e41562963c5ba080 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.405552 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.405586 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c353c599-462c-4196-a35c-7622350bb349-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.405595 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c353c599-462c-4196-a35c-7622350bb349-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.405618 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8r5q\" (UniqueName: \"kubernetes.io/projected/c353c599-462c-4196-a35c-7622350bb349-kube-api-access-j8r5q\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.405665 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.446246 4900 generic.go:334] "Generic (PLEG): container finished" podID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerID="81cc101728f5de7b2d97ff757885b53c0f26120c2800d3265c49f79eefd4fe58" exitCode=143 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.446321 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8a322a0-752b-4ab1-9418-41c4747eebee","Type":"ContainerDied","Data":"81cc101728f5de7b2d97ff757885b53c0f26120c2800d3265c49f79eefd4fe58"} Dec 02 14:07:23 crc 
kubenswrapper[4900]: I1202 14:07:23.453394 4900 generic.go:334] "Generic (PLEG): container finished" podID="5624f474-dd54-4580-b816-f238cc733b5a" containerID="23b3303be4e59abe74fc9f9832f7a048be4602b7bb3f5d0f5af2d708139fa0ab" exitCode=143 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.453460 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5624f474-dd54-4580-b816-f238cc733b5a","Type":"ContainerDied","Data":"23b3303be4e59abe74fc9f9832f7a048be4602b7bb3f5d0f5af2d708139fa0ab"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.462449 4900 generic.go:334] "Generic (PLEG): container finished" podID="f79247d6-28ab-4234-a191-8799418aa3ea" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" exitCode=0 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.462530 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9cwqh" event={"ID":"f79247d6-28ab-4234-a191-8799418aa3ea","Type":"ContainerDied","Data":"231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.469055 4900 generic.go:334] "Generic (PLEG): container finished" podID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerID="4735b662992f5922ce1cad03f6fa4def9946bffac92b4623b987fa581665c1db" exitCode=143 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.469137 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79db7cb55d-4cs7x" event={"ID":"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571","Type":"ContainerDied","Data":"4735b662992f5922ce1cad03f6fa4def9946bffac92b4623b987fa581665c1db"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.471199 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapib12b-account-delete-rl98n" event={"ID":"60ab71b1-8ff6-488c-9401-9b63341b08dd","Type":"ContainerStarted","Data":"7baca948b0da00c1542ff63bc396fc96cd03852806ba07d7e41562963c5ba080"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.476375 4900 generic.go:334] "Generic (PLEG): container finished" podID="cb2b5602-0b26-4de1-ac2c-3606bd0aede3" containerID="58ddf684d77381c4c34646d9e1659713b257f24bcc73342818f13e7ed7d7268f" exitCode=143 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.476441 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ff8b8d959-29bd8" event={"ID":"cb2b5602-0b26-4de1-ac2c-3606bd0aede3","Type":"ContainerDied","Data":"58ddf684d77381c4c34646d9e1659713b257f24bcc73342818f13e7ed7d7268f"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.481475 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" event={"ID":"7d0c0900-1e02-4dec-8e4c-a32f7f560a58","Type":"ContainerDied","Data":"7f0b2ddc5f13797dc9e5c810843a1614b6e864c69b7ddb9a6d8ad718c7ca8b55"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.481515 4900 scope.go:117] "RemoveContainer" containerID="ba3a70bf8272e8aca22d682efbb037a31975cc819affc7f24193c35cc53da8ce" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.481572 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-mqkt5" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.498221 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c353c599-462c-4196-a35c-7622350bb349/ovsdbserver-nb/0.log" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.498297 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c353c599-462c-4196-a35c-7622350bb349","Type":"ContainerDied","Data":"fc28f1797409b2b80f3bcfbd0305eff7f5b9cecb3df5b8c82628029421a94fc8"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.498387 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.503746 4900 generic.go:334] "Generic (PLEG): container finished" podID="557a84eb-0882-44c1-b4db-7c8a19e1303d" containerID="780db50c9c1ada3d5ce136afa95e168fc995789ee6f6731c4c9529970d7dfd6e" exitCode=143 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.503843 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"557a84eb-0882-44c1-b4db-7c8a19e1303d","Type":"ContainerDied","Data":"780db50c9c1ada3d5ce136afa95e168fc995789ee6f6731c4c9529970d7dfd6e"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.521092 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="92b595b2d89b2be8cfc2216546011c9aad218c2d134cbf0d7dd2eeded97e32ae" exitCode=0 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.521126 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="7f2a46fb8785892c4a865fae00d8ed6142ab75fae046b42634d84a99c5fcf69d" exitCode=0 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.521135 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="bf6dbc2d90f268fe7fed54cad255fdedb06111980e4d028a6b734115fcd4bff2" exitCode=0 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.521176 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"92b595b2d89b2be8cfc2216546011c9aad218c2d134cbf0d7dd2eeded97e32ae"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.521243 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"7f2a46fb8785892c4a865fae00d8ed6142ab75fae046b42634d84a99c5fcf69d"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.521262 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"bf6dbc2d90f268fe7fed54cad255fdedb06111980e4d028a6b734115fcd4bff2"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.522609 4900 scope.go:117] "RemoveContainer" containerID="6baaeb01e1d3e64c6d9939f8d07d0c37c56a64de01a9109241264a4db766ca79" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.524670 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_096b1286-863b-44aa-ac7e-5cd509d99950/ovsdbserver-sb/0.log" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.524807 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.526004 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"096b1286-863b-44aa-ac7e-5cd509d99950","Type":"ContainerDied","Data":"c8af75c8342908413c958c358029f92ea134aa7608aef6455d8f79d94bbe561e"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.538155 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron1777-account-delete-mh9jf" event={"ID":"8c322124-103a-40d2-a429-f018076f88ff","Type":"ContainerStarted","Data":"bcafa1e80d465da68c587ebc1409ef867ae714ffcacdaafebdcecd8dbd1899ba"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.546365 4900 generic.go:334] "Generic (PLEG): container finished" podID="9e062e50-5a22-45c0-adab-9f78980eb851" containerID="8b057749e05312ac8b867b23be07aedbc2a70bbb08e3cd32bd27a1b2582ac140" exitCode=143 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.546414 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9e062e50-5a22-45c0-adab-9f78980eb851","Type":"ContainerDied","Data":"8b057749e05312ac8b867b23be07aedbc2a70bbb08e3cd32bd27a1b2582ac140"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.548928 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder5ba5-account-delete-kd6ml" event={"ID":"ff7f1bf7-3734-4c0e-afc2-d841cc97a529","Type":"ContainerStarted","Data":"aa62552ca88d9f76b70cc375eeb3f6280df5712e210d444d514bc2858fb9468c"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.554090 4900 generic.go:334] "Generic (PLEG): container finished" podID="241c5e6f-d993-4c7a-90a2-1ae1786dbea2" containerID="9b9a1adc9bf6bc055b6124dbfb0cf0940f73cceff1fb98ba82f90ebb4fa7c9e3" exitCode=143 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.554150 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" event={"ID":"241c5e6f-d993-4c7a-90a2-1ae1786dbea2","Type":"ContainerDied","Data":"9b9a1adc9bf6bc055b6124dbfb0cf0940f73cceff1fb98ba82f90ebb4fa7c9e3"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.555463 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementa999-account-delete-25gdq" event={"ID":"5d121fee-d98a-4dcd-ba07-1d4b2015460d","Type":"ContainerStarted","Data":"b42c2792f703fe7c00954295ae0cb411363acfaab7617c86614e6e1e9c9c4125"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.557325 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance0080-account-delete-fz4nc" event={"ID":"df8a1411-7582-4f42-8b5a-3b97cebd9254","Type":"ContainerStarted","Data":"64e04d45d4f74716cc37ffb54b1ad5df6dfb0a76eca3363575fddff519fa0daf"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.558884 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.560537 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22b06684-2db9-4dca-aa15-53b22ca686d0" (UID: "22b06684-2db9-4dca-aa15-53b22ca686d0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.561794 4900 generic.go:334] "Generic (PLEG): container finished" podID="c49f994e-0d9a-4312-995d-d84d93f31f01" containerID="7e2a5e62fdfd6d58e261c3a59e316fbb35eb6ecbeecd1f0c5b424484dcb2b5d5" exitCode=0 Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.561882 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c49f994e-0d9a-4312-995d-d84d93f31f01","Type":"ContainerDied","Data":"7e2a5e62fdfd6d58e261c3a59e316fbb35eb6ecbeecd1f0c5b424484dcb2b5d5"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.573264 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.574210 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell02641-account-delete-zn9bc" event={"ID":"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c","Type":"ContainerStarted","Data":"5ef691c8856711c89a8a27c9a1db833a51f4e49b6a4ed9a4c5a87b2a2e1358e0"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.576506 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g4flw_22b06684-2db9-4dca-aa15-53b22ca686d0/openstack-network-exporter/0.log" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.576571 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g4flw" event={"ID":"22b06684-2db9-4dca-aa15-53b22ca686d0","Type":"ContainerDied","Data":"b5fc4b67d130f36f9b5d1bc656d626882211ebcd6de60627d2ad0c0d8e430f97"} Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.576617 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-g4flw" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.578252 4900 scope.go:117] "RemoveContainer" containerID="a10158b3879ca4655fc8e6391c12e72686bdf7aae27551ad7cd381abaa366312" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.613043 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.613070 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.652597 4900 scope.go:117] "RemoveContainer" containerID="757b9daa69d51a10f6b4b9ded6c9bdd6924c1d21fda0bc963aa398285554bfc0" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.684974 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c353c599-462c-4196-a35c-7622350bb349" (UID: "c353c599-462c-4196-a35c-7622350bb349"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.701617 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.714945 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.714978 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.789395 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "096b1286-863b-44aa-ac7e-5cd509d99950" (UID: "096b1286-863b-44aa-ac7e-5cd509d99950"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.798067 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6ff6dcaf-b619-4169-9b36-81ee92264d71" (UID: "6ff6dcaf-b619-4169-9b36-81ee92264d71"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.814968 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7d0c0900-1e02-4dec-8e4c-a32f7f560a58" (UID: "7d0c0900-1e02-4dec-8e4c-a32f7f560a58"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: E1202 14:07:23.816922 4900 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 02 14:07:23 crc kubenswrapper[4900]: E1202 14:07:23.816995 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data podName:8db82600-180c-4114-8006-551e1b566ce5 nodeName:}" failed. No retries permitted until 2025-12-02 14:07:27.816975452 +0000 UTC m=+1493.232789383 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data") pod "rabbitmq-cell1-server-0" (UID: "8db82600-180c-4114-8006-551e1b566ce5") : configmap "rabbitmq-cell1-config-data" not found Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.817184 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.817207 4900 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.817219 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.834973 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ff6dcaf-b619-4169-9b36-81ee92264d71" (UID: "6ff6dcaf-b619-4169-9b36-81ee92264d71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.841678 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d0c0900-1e02-4dec-8e4c-a32f7f560a58" (UID: "7d0c0900-1e02-4dec-8e4c-a32f7f560a58"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.843714 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "c353c599-462c-4196-a35c-7622350bb349" (UID: "c353c599-462c-4196-a35c-7622350bb349"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.859627 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6ff6dcaf-b619-4169-9b36-81ee92264d71" (UID: "6ff6dcaf-b619-4169-9b36-81ee92264d71"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.881790 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "096b1286-863b-44aa-ac7e-5cd509d99950" (UID: "096b1286-863b-44aa-ac7e-5cd509d99950"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.885020 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-config" (OuterVolumeSpecName: "config") pod "7d0c0900-1e02-4dec-8e4c-a32f7f560a58" (UID: "7d0c0900-1e02-4dec-8e4c-a32f7f560a58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.889778 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "096b1286-863b-44aa-ac7e-5cd509d99950" (UID: "096b1286-863b-44aa-ac7e-5cd509d99950"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.898443 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c353c599-462c-4196-a35c-7622350bb349" (UID: "c353c599-462c-4196-a35c-7622350bb349"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.903357 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "22b06684-2db9-4dca-aa15-53b22ca686d0" (UID: "22b06684-2db9-4dca-aa15-53b22ca686d0"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.904583 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7d0c0900-1e02-4dec-8e4c-a32f7f560a58" (UID: "7d0c0900-1e02-4dec-8e4c-a32f7f560a58"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.916955 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7d0c0900-1e02-4dec-8e4c-a32f7f560a58" (UID: "7d0c0900-1e02-4dec-8e4c-a32f7f560a58"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.918619 4900 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b06684-2db9-4dca-aa15-53b22ca686d0-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.918660 4900 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.918669 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.918678 4900 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c353c599-462c-4196-a35c-7622350bb349-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.918687 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/096b1286-863b-44aa-ac7e-5cd509d99950-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.918697 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.918705 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.918714 4900 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6ff6dcaf-b619-4169-9b36-81ee92264d71-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.918721 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.918729 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4900]: I1202 14:07:23.918739 4900 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d0c0900-1e02-4dec-8e4c-a32f7f560a58-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.000385 4900 scope.go:117] "RemoveContainer" containerID="285f3512fc061d05a5061746933aff39280feb7e3c81097b9fc9c0b9cf0d32da" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.024752 4900 scope.go:117] "RemoveContainer" containerID="d77aef55bf78bad4c8ebeabb8b463139bc8041d2813719d17d146d9007571a93" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.050788 4900 scope.go:117] "RemoveContainer" 
containerID="6c10ca3ad8fceefa6060a64015e329b0a3d68e6291a95453dad98596043ab0ef" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.071014 4900 scope.go:117] "RemoveContainer" containerID="19705f019e43eb0fec10afe7795a2b153f6e8f761831faa811bf6940dbd55294" Dec 02 14:07:24 crc kubenswrapper[4900]: E1202 14:07:24.124424 4900 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 02 14:07:24 crc kubenswrapper[4900]: E1202 14:07:24.124500 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data podName:e410de46-b373-431a-8486-21a6f1268e41 nodeName:}" failed. No retries permitted until 2025-12-02 14:07:28.124485191 +0000 UTC m=+1493.540299042 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data") pod "rabbitmq-server-0" (UID: "e410de46-b373-431a-8486-21a6f1268e41") : configmap "rabbitmq-config-data" not found Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.134465 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mqkt5"] Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.145797 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-mqkt5"] Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.162219 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e410de46-b373-431a-8486-21a6f1268e41" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.256213 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-g4flw"] Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.268629 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-g4flw"] Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.313766 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.338779 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.351396 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.360031 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.390284 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="8db82600-180c-4114-8006-551e1b566ce5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.432512 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5b7cc7fc75-qdmm5"] Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.432902 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" podUID="00825975-35eb-46d6-8aeb-753170564467" containerName="proxy-httpd" containerID="cri-o://abf93b4af2ac9dd4692589722eae2b650829168ac6beae85279f3137d14413fd" gracePeriod=30 Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 
14:07:24.433546 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" podUID="00825975-35eb-46d6-8aeb-753170564467" containerName="proxy-server" containerID="cri-o://17feb893704561d9ac1181affdd466264eef60e1c8a36bef9d6bfe975eaed6b6" gracePeriod=30 Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.588714 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementa999-account-delete-25gdq" event={"ID":"5d121fee-d98a-4dcd-ba07-1d4b2015460d","Type":"ContainerStarted","Data":"bfdc120ba1bdb391645cff021176334436c917a7367050aa573f3c24192b27ef"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.592670 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder5ba5-account-delete-kd6ml" event={"ID":"ff7f1bf7-3734-4c0e-afc2-d841cc97a529","Type":"ContainerStarted","Data":"1f5e54223736749411f69a0010169896be9ba0e0109828ad0b779c7a752eb6b2"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.594959 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican4c1f-account-delete-ksk64" event={"ID":"b27a1401-3ad1-40a3-9ce6-08cac86fef42","Type":"ContainerStarted","Data":"29c7e3b447f8e90946a42aa75df7262af18b7cbe70118f4026c7a4203713e35f"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.597116 4900 generic.go:334] "Generic (PLEG): container finished" podID="00825975-35eb-46d6-8aeb-753170564467" containerID="abf93b4af2ac9dd4692589722eae2b650829168ac6beae85279f3137d14413fd" exitCode=0 Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.597157 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" event={"ID":"00825975-35eb-46d6-8aeb-753170564467","Type":"ContainerDied","Data":"abf93b4af2ac9dd4692589722eae2b650829168ac6beae85279f3137d14413fd"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.603012 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapib12b-account-delete-rl98n" event={"ID":"60ab71b1-8ff6-488c-9401-9b63341b08dd","Type":"ContainerStarted","Data":"9dd52ef17bb2168dda2bf0989c02753d795d4aedc98c658cd7c2b41c233f6f31"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.611175 4900 generic.go:334] "Generic (PLEG): container finished" podID="ad78a256-27f0-46a9-addb-dbc7b41bebd2" containerID="9d4da9c7aa6120d5ccd058c5a050090e9130cfb769ef31b34092dd5d53ce8475" exitCode=0 Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.611236 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ad78a256-27f0-46a9-addb-dbc7b41bebd2","Type":"ContainerDied","Data":"9d4da9c7aa6120d5ccd058c5a050090e9130cfb769ef31b34092dd5d53ce8475"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.620328 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placementa999-account-delete-25gdq" podStartSLOduration=4.620301594 podStartE2EDuration="4.620301594s" podCreationTimestamp="2025-12-02 14:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:24.612837624 +0000 UTC m=+1490.028651475" watchObservedRunningTime="2025-12-02 14:07:24.620301594 +0000 UTC m=+1490.036115445" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.630281 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="5e0242301bbd13a18a7ab682fc5ef7d58a6f6c86146abab5ab241882c022c72e" exitCode=0 
Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.630721 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"5e0242301bbd13a18a7ab682fc5ef7d58a6f6c86146abab5ab241882c022c72e"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.635158 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican4c1f-account-delete-ksk64" podStartSLOduration=4.63514503 podStartE2EDuration="4.63514503s" podCreationTimestamp="2025-12-02 14:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:24.633746141 +0000 UTC m=+1490.049559992" watchObservedRunningTime="2025-12-02 14:07:24.63514503 +0000 UTC m=+1490.050958871" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.639945 4900 generic.go:334] "Generic (PLEG): container finished" podID="7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" containerID="ff24395ff17544005ed3b0c813dfed8d4179e1e8e38687a4303ee6b98024dcbd" exitCode=0 Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.640007 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d9bd66cf-nlpm2" event={"ID":"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5","Type":"ContainerDied","Data":"ff24395ff17544005ed3b0c813dfed8d4179e1e8e38687a4303ee6b98024dcbd"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.642871 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell02641-account-delete-zn9bc" event={"ID":"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c","Type":"ContainerStarted","Data":"f42e2dec24b72332041c0590468f513f2e9b290b1de836a0a22d3dd19494f14f"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.644775 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance0080-account-delete-fz4nc" event={"ID":"df8a1411-7582-4f42-8b5a-3b97cebd9254","Type":"ContainerStarted","Data":"bb84c573856baa8e5ade2df9297e29c43751017e92787104acb31419fd9eb3d4"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.648289 4900 generic.go:334] "Generic (PLEG): container finished" podID="c49f994e-0d9a-4312-995d-d84d93f31f01" containerID="75b061719509895544c5101526474e3593e712a83ea8bf19f5d39b1e05838e7d" exitCode=0 Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.648338 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c49f994e-0d9a-4312-995d-d84d93f31f01","Type":"ContainerDied","Data":"75b061719509895544c5101526474e3593e712a83ea8bf19f5d39b1e05838e7d"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.653078 4900 generic.go:334] "Generic (PLEG): container finished" podID="8c322124-103a-40d2-a429-f018076f88ff" containerID="a9f4af34055e2f5c51f621081e1b4146f6a4806f7186d24561b906becbbe4c8e" exitCode=0 Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.653178 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron1777-account-delete-mh9jf" event={"ID":"8c322124-103a-40d2-a429-f018076f88ff","Type":"ContainerDied","Data":"a9f4af34055e2f5c51f621081e1b4146f6a4806f7186d24561b906becbbe4c8e"} Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.654904 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novaapib12b-account-delete-rl98n" podStartSLOduration=3.6548791940000003 podStartE2EDuration="3.654879194s" podCreationTimestamp="2025-12-02 14:07:21 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:24.652212409 +0000 UTC m=+1490.068026260" watchObservedRunningTime="2025-12-02 14:07:24.654879194 +0000 UTC m=+1490.070693045" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.698024 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder5ba5-account-delete-kd6ml" podStartSLOduration=4.698004354 podStartE2EDuration="4.698004354s" podCreationTimestamp="2025-12-02 14:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:24.668327761 +0000 UTC m=+1490.084141622" watchObservedRunningTime="2025-12-02 14:07:24.698004354 +0000 UTC m=+1490.113818205" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.704349 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novacell02641-account-delete-zn9bc" podStartSLOduration=4.704338092 podStartE2EDuration="4.704338092s" podCreationTimestamp="2025-12-02 14:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:24.68968031 +0000 UTC m=+1490.105494181" watchObservedRunningTime="2025-12-02 14:07:24.704338092 +0000 UTC m=+1490.120151943" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.733457 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance0080-account-delete-fz4nc" podStartSLOduration=4.733427328 podStartE2EDuration="4.733427328s" podCreationTimestamp="2025-12-02 14:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:24.714559248 +0000 UTC m=+1490.130373099" watchObservedRunningTime="2025-12-02 14:07:24.733427328 +0000 UTC m=+1490.149241179" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.831872 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" podUID="00825975-35eb-46d6-8aeb-753170564467" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.167:8080/healthcheck\": dial tcp 10.217.0.167:8080: connect: connection refused" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.832182 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" podUID="00825975-35eb-46d6-8aeb-753170564467" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.167:8080/healthcheck\": dial tcp 10.217.0.167:8080: connect: connection refused" Dec 02 14:07:24 crc kubenswrapper[4900]: E1202 14:07:24.856332 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9b7c3327a1cb841f7805b58f727c06e1d6143291f5866de8942d0948d6568573" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 02 14:07:24 crc kubenswrapper[4900]: E1202 14:07:24.868324 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9b7c3327a1cb841f7805b58f727c06e1d6143291f5866de8942d0948d6568573" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 02 14:07:24 crc kubenswrapper[4900]: E1202 
14:07:24.871938 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9b7c3327a1cb841f7805b58f727c06e1d6143291f5866de8942d0948d6568573" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 02 14:07:24 crc kubenswrapper[4900]: E1202 14:07:24.872015 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="9c17cf84-2174-42d8-880a-9a643a161ef4" containerName="ovn-northd" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.919850 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="096b1286-863b-44aa-ac7e-5cd509d99950" path="/var/lib/kubelet/pods/096b1286-863b-44aa-ac7e-5cd509d99950/volumes" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.920485 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22b06684-2db9-4dca-aa15-53b22ca686d0" path="/var/lib/kubelet/pods/22b06684-2db9-4dca-aa15-53b22ca686d0/volumes" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.921287 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ff6dcaf-b619-4169-9b36-81ee92264d71" path="/var/lib/kubelet/pods/6ff6dcaf-b619-4169-9b36-81ee92264d71/volumes" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.922253 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d0c0900-1e02-4dec-8e4c-a32f7f560a58" path="/var/lib/kubelet/pods/7d0c0900-1e02-4dec-8e4c-a32f7f560a58/volumes" Dec 02 14:07:24 crc kubenswrapper[4900]: I1202 14:07:24.924055 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c353c599-462c-4196-a35c-7622350bb349" path="/var/lib/kubelet/pods/c353c599-462c-4196-a35c-7622350bb349/volumes" Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.210849 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="557a84eb-0882-44c1-b4db-7c8a19e1303d" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": read tcp 10.217.0.2:52222->10.217.0.164:8776: read: connection reset by peer" Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.321955 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79db7cb55d-4cs7x" podUID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:34764->10.217.0.156:9311: read: connection reset by peer" Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.321967 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79db7cb55d-4cs7x" podUID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:34762->10.217.0.156:9311: read: connection reset by peer" Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.332158 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="ad78a256-27f0-46a9-addb-dbc7b41bebd2" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.195:6080/vnc_lite.html\": dial tcp 10.217.0.195:6080: connect: connection refused" Dec 02 14:07:25 crc 
kubenswrapper[4900]: E1202 14:07:25.509483 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a015b969677f8a38ffbf9b4e7f89474014d3449b484894c6c2a8469cb1a3e61" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 02 14:07:25 crc kubenswrapper[4900]: E1202 14:07:25.528472 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a015b969677f8a38ffbf9b4e7f89474014d3449b484894c6c2a8469cb1a3e61" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 02 14:07:25 crc kubenswrapper[4900]: E1202 14:07:25.533329 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a015b969677f8a38ffbf9b4e7f89474014d3449b484894c6c2a8469cb1a3e61" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 02 14:07:25 crc kubenswrapper[4900]: E1202 14:07:25.533491 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="bf4fd62f-751c-4ba7-8582-3d953bdc0bf6" containerName="nova-cell0-conductor-conductor" Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.688830 4900 generic.go:334] "Generic (PLEG): container finished" podID="b27a1401-3ad1-40a3-9ce6-08cac86fef42" containerID="29c7e3b447f8e90946a42aa75df7262af18b7cbe70118f4026c7a4203713e35f" exitCode=0 Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.688918 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican4c1f-account-delete-ksk64" event={"ID":"b27a1401-3ad1-40a3-9ce6-08cac86fef42","Type":"ContainerDied","Data":"29c7e3b447f8e90946a42aa75df7262af18b7cbe70118f4026c7a4203713e35f"} Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.695546 4900 generic.go:334] "Generic (PLEG): container finished" podID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerID="32bcc73f011b1518d11ed9404adb699f4c8c5a3bbe633bebd10a6b2b3117dd08" exitCode=0 Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.695630 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79db7cb55d-4cs7x" event={"ID":"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571","Type":"ContainerDied","Data":"32bcc73f011b1518d11ed9404adb699f4c8c5a3bbe633bebd10a6b2b3117dd08"} Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.703481 4900 generic.go:334] "Generic (PLEG): container finished" podID="9e062e50-5a22-45c0-adab-9f78980eb851" containerID="974bb345b96f61cacf2fe7abb81edc513dd4982f9f5eaf42cece693b6d995322" exitCode=0 Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.703543 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9e062e50-5a22-45c0-adab-9f78980eb851","Type":"ContainerDied","Data":"974bb345b96f61cacf2fe7abb81edc513dd4982f9f5eaf42cece693b6d995322"} Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.707464 4900 generic.go:334] "Generic (PLEG): container finished" podID="7b0e50c7-752e-4879-a382-ff97500cfd89" containerID="6554b3a343d89d6f8889d9cc9f50c9bd71066e708a1b0d388e5faa67db8d54dc" exitCode=0 Dec 02 14:07:25 crc 
kubenswrapper[4900]: I1202 14:07:25.707569 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f6dffdfb8-h46pm" event={"ID":"7b0e50c7-752e-4879-a382-ff97500cfd89","Type":"ContainerDied","Data":"6554b3a343d89d6f8889d9cc9f50c9bd71066e708a1b0d388e5faa67db8d54dc"} Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.714230 4900 generic.go:334] "Generic (PLEG): container finished" podID="00825975-35eb-46d6-8aeb-753170564467" containerID="17feb893704561d9ac1181affdd466264eef60e1c8a36bef9d6bfe975eaed6b6" exitCode=0 Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.714295 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" event={"ID":"00825975-35eb-46d6-8aeb-753170564467","Type":"ContainerDied","Data":"17feb893704561d9ac1181affdd466264eef60e1c8a36bef9d6bfe975eaed6b6"} Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.717549 4900 generic.go:334] "Generic (PLEG): container finished" podID="557a84eb-0882-44c1-b4db-7c8a19e1303d" containerID="440f785e6ac340819ae403625dc734fd43ed1abbd0b52db9080939d07419abce" exitCode=0 Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.717613 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"557a84eb-0882-44c1-b4db-7c8a19e1303d","Type":"ContainerDied","Data":"440f785e6ac340819ae403625dc734fd43ed1abbd0b52db9080939d07419abce"} Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.721628 4900 generic.go:334] "Generic (PLEG): container finished" podID="1a302619-4a69-4e62-b7cb-6812b771f6d4" containerID="7eb9fd41a54b8a69b7cc6d75b1be55aec13e8d931357769f99ce0bad86542d63" exitCode=0 Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.721699 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a302619-4a69-4e62-b7cb-6812b771f6d4","Type":"ContainerDied","Data":"7eb9fd41a54b8a69b7cc6d75b1be55aec13e8d931357769f99ce0bad86542d63"} Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.723745 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c49f994e-0d9a-4312-995d-d84d93f31f01","Type":"ContainerDied","Data":"91bf73d9fd7b730de83c0c5bdec437554ab5e64ee9f986dca6b070f5d83dba2c"} Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.723787 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91bf73d9fd7b730de83c0c5bdec437554ab5e64ee9f986dca6b070f5d83dba2c" Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.771959 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.772225 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="ceilometer-central-agent" containerID="cri-o://8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a" gracePeriod=30 Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.775297 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="proxy-httpd" containerID="cri-o://5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8" gracePeriod=30 Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.777323 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="ceilometer-notification-agent" containerID="cri-o://75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f" gracePeriod=30 Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.777438 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="sg-core" containerID="cri-o://2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4" gracePeriod=30 Dec 02 14:07:25 crc kubenswrapper[4900]: E1202 14:07:25.810206 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88407bc6bd8e2dfedf02e3b155acca5b6726043dcabd9f0d892d020af43e5c9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.812550 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.812755 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="97e183cf-c0fe-4f94-9c03-7f8fa792c4af" containerName="kube-state-metrics" containerID="cri-o://2cfb4aee9c0e60e8a52f829f41a8643aa512a3a323f16d80fe672fa39d031a24" gracePeriod=30 Dec 02 14:07:25 crc kubenswrapper[4900]: E1202 14:07:25.832022 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88407bc6bd8e2dfedf02e3b155acca5b6726043dcabd9f0d892d020af43e5c9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:07:25 crc kubenswrapper[4900]: E1202 14:07:25.945870 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88407bc6bd8e2dfedf02e3b155acca5b6726043dcabd9f0d892d020af43e5c9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:07:25 crc kubenswrapper[4900]: E1202 14:07:25.946114 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ed8121d7-7b10-44c5-9b43-9088b198f34c" containerName="nova-scheduler-scheduler" Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.956035 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.956250 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="812fa799-d734-4151-b87f-25d638295714" containerName="memcached" containerID="cri-o://32a45c77a6ef2f6050d3ead873d9d3fa8d6013c262c2306ca091361bfe251ace" gracePeriod=30 Dec 02 14:07:25 crc kubenswrapper[4900]: I1202 14:07:25.991549 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-d9ld4"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.002945 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-d9ld4"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.008763 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-db-sync-smkbq"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.017161 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-smkbq"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.019162 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.021877 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6595dffb96-c4mrx"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.022113 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6595dffb96-c4mrx" podUID="d42b962f-20f0-43d1-a1c4-c16c9392ec82" containerName="keystone-api" containerID="cri-o://e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd" gracePeriod=30 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.028417 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.028738 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="97e183cf-c0fe-4f94-9c03-7f8fa792c4af" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.191:8081/readyz\": dial tcp 10.217.0.191:8081: connect: connection refused" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.179470 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-lzg8p"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.229355 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-lzg8p"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.251942 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data\") pod \"c49f994e-0d9a-4312-995d-d84d93f31f01\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.270396 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data-custom\") pod \"c49f994e-0d9a-4312-995d-d84d93f31f01\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.259507 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell02641-account-delete-zn9bc"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.289049 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqcm"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.289143 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c49f994e-0d9a-4312-995d-d84d93f31f01" (UID: "c49f994e-0d9a-4312-995d-d84d93f31f01"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.291877 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-scripts\") pod \"c49f994e-0d9a-4312-995d-d84d93f31f01\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.291933 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgphn\" (UniqueName: \"kubernetes.io/projected/c49f994e-0d9a-4312-995d-d84d93f31f01-kube-api-access-jgphn\") pod \"c49f994e-0d9a-4312-995d-d84d93f31f01\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.291952 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c49f994e-0d9a-4312-995d-d84d93f31f01-etc-machine-id\") pod \"c49f994e-0d9a-4312-995d-d84d93f31f01\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292042 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-combined-ca-bundle\") pod \"c49f994e-0d9a-4312-995d-d84d93f31f01\" (UID: \"c49f994e-0d9a-4312-995d-d84d93f31f01\") " Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.292192 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49f994e-0d9a-4312-995d-d84d93f31f01" containerName="probe" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292223 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49f994e-0d9a-4312-995d-d84d93f31f01" containerName="probe" Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.292248 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096b1286-863b-44aa-ac7e-5cd509d99950" containerName="openstack-network-exporter" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292253 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="096b1286-863b-44aa-ac7e-5cd509d99950" containerName="openstack-network-exporter" Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.292264 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096b1286-863b-44aa-ac7e-5cd509d99950" containerName="ovsdbserver-sb" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292271 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="096b1286-863b-44aa-ac7e-5cd509d99950" containerName="ovsdbserver-sb" Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.292283 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c353c599-462c-4196-a35c-7622350bb349" containerName="openstack-network-exporter" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292288 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c353c599-462c-4196-a35c-7622350bb349" containerName="openstack-network-exporter" Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.292296 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0c0900-1e02-4dec-8e4c-a32f7f560a58" containerName="dnsmasq-dns" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292301 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0c0900-1e02-4dec-8e4c-a32f7f560a58" containerName="dnsmasq-dns" Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.292329 4900 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22b06684-2db9-4dca-aa15-53b22ca686d0" containerName="openstack-network-exporter" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292335 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="22b06684-2db9-4dca-aa15-53b22ca686d0" containerName="openstack-network-exporter" Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.292345 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49f994e-0d9a-4312-995d-d84d93f31f01" containerName="cinder-scheduler" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292350 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49f994e-0d9a-4312-995d-d84d93f31f01" containerName="cinder-scheduler" Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.292360 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0c0900-1e02-4dec-8e4c-a32f7f560a58" containerName="init" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292365 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0c0900-1e02-4dec-8e4c-a32f7f560a58" containerName="init" Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.292371 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c353c599-462c-4196-a35c-7622350bb349" containerName="ovsdbserver-nb" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292376 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c353c599-462c-4196-a35c-7622350bb349" containerName="ovsdbserver-nb" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292545 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c49f994e-0d9a-4312-995d-d84d93f31f01" containerName="probe" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292555 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="22b06684-2db9-4dca-aa15-53b22ca686d0" containerName="openstack-network-exporter" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292565 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="096b1286-863b-44aa-ac7e-5cd509d99950" containerName="ovsdbserver-sb" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292576 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c353c599-462c-4196-a35c-7622350bb349" containerName="ovsdbserver-nb" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292586 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c49f994e-0d9a-4312-995d-d84d93f31f01" containerName="cinder-scheduler" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292593 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c353c599-462c-4196-a35c-7622350bb349" containerName="openstack-network-exporter" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292621 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0c0900-1e02-4dec-8e4c-a32f7f560a58" containerName="dnsmasq-dns" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292632 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="096b1286-863b-44aa-ac7e-5cd509d99950" containerName="openstack-network-exporter" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.292736 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.299457 4900 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/c49f994e-0d9a-4312-995d-d84d93f31f01-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c49f994e-0d9a-4312-995d-d84d93f31f01" (UID: "c49f994e-0d9a-4312-995d-d84d93f31f01"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.300825 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqcm"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.300949 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.308770 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2641-account-create-update-42tfb"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.312741 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2641-account-create-update-42tfb"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.313752 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-scripts" (OuterVolumeSpecName: "scripts") pod "c49f994e-0d9a-4312-995d-d84d93f31f01" (UID: "c49f994e-0d9a-4312-995d-d84d93f31f01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.322051 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c49f994e-0d9a-4312-995d-d84d93f31f01-kube-api-access-jgphn" (OuterVolumeSpecName: "kube-api-access-jgphn") pod "c49f994e-0d9a-4312-995d-d84d93f31f01" (UID: "c49f994e-0d9a-4312-995d-d84d93f31f01"). InnerVolumeSpecName "kube-api-access-jgphn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.335451 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ngxzq"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.362187 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ngxzq"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.378674 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f5f9-account-create-update-29pcz"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.387927 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f5f9-account-create-update-29pcz"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.394779 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-259h6\" (UniqueName: \"kubernetes.io/projected/4d1c4eca-52c1-4143-832e-b377e4415feb-kube-api-access-259h6\") pod \"redhat-marketplace-2hqcm\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.394883 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-utilities\") pod \"redhat-marketplace-2hqcm\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.394964 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-catalog-content\") pod \"redhat-marketplace-2hqcm\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.395013 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.395025 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgphn\" (UniqueName: \"kubernetes.io/projected/c49f994e-0d9a-4312-995d-d84d93f31f01-kube-api-access-jgphn\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.395037 4900 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c49f994e-0d9a-4312-995d-d84d93f31f01-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.396491 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-s9gw6"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.403442 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-s9gw6"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.418253 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapib12b-account-delete-rl98n"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.424583 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b12b-account-create-update-26q6g"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.427690 4900 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/openstack-galera-0" podUID="f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" containerName="galera" containerID="cri-o://30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9" gracePeriod=30 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.431186 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b12b-account-create-update-26q6g"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.457695 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c49f994e-0d9a-4312-995d-d84d93f31f01" (UID: "c49f994e-0d9a-4312-995d-d84d93f31f01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.483721 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data" (OuterVolumeSpecName: "config-data") pod "c49f994e-0d9a-4312-995d-d84d93f31f01" (UID: "c49f994e-0d9a-4312-995d-d84d93f31f01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.501485 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-catalog-content\") pod \"redhat-marketplace-2hqcm\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.501568 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-259h6\" (UniqueName: \"kubernetes.io/projected/4d1c4eca-52c1-4143-832e-b377e4415feb-kube-api-access-259h6\") pod \"redhat-marketplace-2hqcm\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.501653 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-utilities\") pod \"redhat-marketplace-2hqcm\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.501709 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.501721 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49f994e-0d9a-4312-995d-d84d93f31f01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.502269 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-utilities\") pod \"redhat-marketplace-2hqcm\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.502663 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-catalog-content\") pod \"redhat-marketplace-2hqcm\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.543222 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-259h6\" (UniqueName: \"kubernetes.io/projected/4d1c4eca-52c1-4143-832e-b377e4415feb-kube-api-access-259h6\") pod \"redhat-marketplace-2hqcm\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.599609 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.606227 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.625777 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.705941 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-config-data\") pod \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.705989 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-vencrypt-tls-certs\") pod \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.706012 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-nova-novncproxy-tls-certs\") pod \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.706066 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-combined-ca-bundle\") pod \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.706091 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgzrn\" (UniqueName: \"kubernetes.io/projected/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-kube-api-access-sgzrn\") pod \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.706164 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data\") pod \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.706209 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-internal-tls-certs\") pod \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.706247 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-logs\") pod \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.706268 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sg5w\" (UniqueName: \"kubernetes.io/projected/ad78a256-27f0-46a9-addb-dbc7b41bebd2-kube-api-access-5sg5w\") pod \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\" (UID: \"ad78a256-27f0-46a9-addb-dbc7b41bebd2\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.706296 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-combined-ca-bundle\") pod \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.706333 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-public-tls-certs\") pod \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.706356 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data-custom\") pod \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\" (UID: \"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.710586 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-logs" (OuterVolumeSpecName: "logs") pod "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" (UID: "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.747951 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad78a256-27f0-46a9-addb-dbc7b41bebd2-kube-api-access-5sg5w" (OuterVolumeSpecName: "kube-api-access-5sg5w") pod "ad78a256-27f0-46a9-addb-dbc7b41bebd2" (UID: "ad78a256-27f0-46a9-addb-dbc7b41bebd2"). InnerVolumeSpecName "kube-api-access-5sg5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.765617 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-config-data" (OuterVolumeSpecName: "config-data") pod "ad78a256-27f0-46a9-addb-dbc7b41bebd2" (UID: "ad78a256-27f0-46a9-addb-dbc7b41bebd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.767823 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.768805 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron1777-account-delete-mh9jf" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.770290 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" (UID: "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.777806 4900 generic.go:334] "Generic (PLEG): container finished" podID="e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c" containerID="f42e2dec24b72332041c0590468f513f2e9b290b1de836a0a22d3dd19494f14f" exitCode=0 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.777854 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell02641-account-delete-zn9bc" event={"ID":"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c","Type":"ContainerDied","Data":"f42e2dec24b72332041c0590468f513f2e9b290b1de836a0a22d3dd19494f14f"} Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.790999 4900 generic.go:334] "Generic (PLEG): container finished" podID="60ab71b1-8ff6-488c-9401-9b63341b08dd" containerID="9dd52ef17bb2168dda2bf0989c02753d795d4aedc98c658cd7c2b41c233f6f31" exitCode=0 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.791070 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapib12b-account-delete-rl98n" event={"ID":"60ab71b1-8ff6-488c-9401-9b63341b08dd","Type":"ContainerDied","Data":"9dd52ef17bb2168dda2bf0989c02753d795d4aedc98c658cd7c2b41c233f6f31"} Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.796784 4900 generic.go:334] "Generic (PLEG): container finished" podID="241c5e6f-d993-4c7a-90a2-1ae1786dbea2" containerID="683ad46b79d8da86e3dc06d5fc634651aa5b590466fe3ab013890ca87d56975d" exitCode=0 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.796834 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" event={"ID":"241c5e6f-d993-4c7a-90a2-1ae1786dbea2","Type":"ContainerDied","Data":"683ad46b79d8da86e3dc06d5fc634651aa5b590466fe3ab013890ca87d56975d"} Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.803771 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-kube-api-access-sgzrn" (OuterVolumeSpecName: "kube-api-access-sgzrn") pod "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" (UID: "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571"). InnerVolumeSpecName "kube-api-access-sgzrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.806274 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data" (OuterVolumeSpecName: "config-data") pod "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" (UID: "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.806423 4900 generic.go:334] "Generic (PLEG): container finished" podID="5d121fee-d98a-4dcd-ba07-1d4b2015460d" containerID="bfdc120ba1bdb391645cff021176334436c917a7367050aa573f3c24192b27ef" exitCode=0 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.806479 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementa999-account-delete-25gdq" event={"ID":"5d121fee-d98a-4dcd-ba07-1d4b2015460d","Type":"ContainerDied","Data":"bfdc120ba1bdb391645cff021176334436c917a7367050aa573f3c24192b27ef"} Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.812881 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.812903 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgzrn\" (UniqueName: \"kubernetes.io/projected/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-kube-api-access-sgzrn\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.812915 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.812923 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.812931 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sg5w\" (UniqueName: \"kubernetes.io/projected/ad78a256-27f0-46a9-addb-dbc7b41bebd2-kube-api-access-5sg5w\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.812940 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.827516 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" (UID: "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.834838 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" (UID: "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.845784 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad78a256-27f0-46a9-addb-dbc7b41bebd2" (UID: "ad78a256-27f0-46a9-addb-dbc7b41bebd2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.848588 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" (UID: "9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.850215 4900 generic.go:334] "Generic (PLEG): container finished" podID="533da492-8f1f-4593-86bd-8d5b316bb897" containerID="5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8" exitCode=0 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.850238 4900 generic.go:334] "Generic (PLEG): container finished" podID="533da492-8f1f-4593-86bd-8d5b316bb897" containerID="2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4" exitCode=2 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.850245 4900 generic.go:334] "Generic (PLEG): container finished" podID="533da492-8f1f-4593-86bd-8d5b316bb897" containerID="8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a" exitCode=0 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.850284 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"533da492-8f1f-4593-86bd-8d5b316bb897","Type":"ContainerDied","Data":"5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8"} Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.850313 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"533da492-8f1f-4593-86bd-8d5b316bb897","Type":"ContainerDied","Data":"2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4"} Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.850324 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"533da492-8f1f-4593-86bd-8d5b316bb897","Type":"ContainerDied","Data":"8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a"} Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.859676 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f6dffdfb8-h46pm" event={"ID":"7b0e50c7-752e-4879-a382-ff97500cfd89","Type":"ContainerDied","Data":"7d949b736f7e4ed227f9fc07962b37278f3ae5880a6f7739e8978c75cabf68c8"} Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.859726 4900 scope.go:117] "RemoveContainer" containerID="6554b3a343d89d6f8889d9cc9f50c9bd71066e708a1b0d388e5faa67db8d54dc" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.859854 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f6dffdfb8-h46pm" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.869663 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "ad78a256-27f0-46a9-addb-dbc7b41bebd2" (UID: "ad78a256-27f0-46a9-addb-dbc7b41bebd2"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.886685 4900 generic.go:334] "Generic (PLEG): container finished" podID="5624f474-dd54-4580-b816-f238cc733b5a" containerID="111a5b135a319c9f662db3b3e6a11bfecf64162e38bdb1b58f02ab519892e209" exitCode=0 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.886752 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5624f474-dd54-4580-b816-f238cc733b5a","Type":"ContainerDied","Data":"111a5b135a319c9f662db3b3e6a11bfecf64162e38bdb1b58f02ab519892e209"} Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.896902 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "ad78a256-27f0-46a9-addb-dbc7b41bebd2" (UID: "ad78a256-27f0-46a9-addb-dbc7b41bebd2"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.900800 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9 is running failed: container process not found" containerID="f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.903416 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9 is running failed: container process not found" containerID="f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.905441 4900 generic.go:334] "Generic (PLEG): container finished" podID="97e183cf-c0fe-4f94-9c03-7f8fa792c4af" containerID="2cfb4aee9c0e60e8a52f829f41a8643aa512a3a323f16d80fe672fa39d031a24" exitCode=2 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.905498 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97e183cf-c0fe-4f94-9c03-7f8fa792c4af","Type":"ContainerDied","Data":"2cfb4aee9c0e60e8a52f829f41a8643aa512a3a323f16d80fe672fa39d031a24"} Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.905535 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9 is running failed: container process not found" containerID="f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.905559 4900 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="ab69f1a2-78df-4097-a527-0b90345cdcfe" containerName="galera" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 
14:07:26.917357 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-combined-ca-bundle\") pod \"7b0e50c7-752e-4879-a382-ff97500cfd89\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.917426 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcgvs\" (UniqueName: \"kubernetes.io/projected/7b0e50c7-752e-4879-a382-ff97500cfd89-kube-api-access-jcgvs\") pod \"7b0e50c7-752e-4879-a382-ff97500cfd89\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.917518 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c322124-103a-40d2-a429-f018076f88ff-operator-scripts\") pod \"8c322124-103a-40d2-a429-f018076f88ff\" (UID: \"8c322124-103a-40d2-a429-f018076f88ff\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.917542 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-public-tls-certs\") pod \"7b0e50c7-752e-4879-a382-ff97500cfd89\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.917625 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b0e50c7-752e-4879-a382-ff97500cfd89-logs\") pod \"7b0e50c7-752e-4879-a382-ff97500cfd89\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.917687 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-config-data\") pod \"7b0e50c7-752e-4879-a382-ff97500cfd89\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.917710 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-scripts\") pod \"7b0e50c7-752e-4879-a382-ff97500cfd89\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.917726 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf57v\" (UniqueName: \"kubernetes.io/projected/8c322124-103a-40d2-a429-f018076f88ff-kube-api-access-zf57v\") pod \"8c322124-103a-40d2-a429-f018076f88ff\" (UID: \"8c322124-103a-40d2-a429-f018076f88ff\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.917763 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-internal-tls-certs\") pod \"7b0e50c7-752e-4879-a382-ff97500cfd89\" (UID: \"7b0e50c7-752e-4879-a382-ff97500cfd89\") " Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.918176 4900 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.918189 4900 reconciler_common.go:293] "Volume detached for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.918200 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad78a256-27f0-46a9-addb-dbc7b41bebd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.918209 4900 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.918218 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.918226 4900 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.918338 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c322124-103a-40d2-a429-f018076f88ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c322124-103a-40d2-a429-f018076f88ff" (UID: "8c322124-103a-40d2-a429-f018076f88ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.920283 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b0e50c7-752e-4879-a382-ff97500cfd89-logs" (OuterVolumeSpecName: "logs") pod "7b0e50c7-752e-4879-a382-ff97500cfd89" (UID: "7b0e50c7-752e-4879-a382-ff97500cfd89"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.922131 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c322124-103a-40d2-a429-f018076f88ff-kube-api-access-zf57v" (OuterVolumeSpecName: "kube-api-access-zf57v") pod "8c322124-103a-40d2-a429-f018076f88ff" (UID: "8c322124-103a-40d2-a429-f018076f88ff"). InnerVolumeSpecName "kube-api-access-zf57v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.946012 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="510d10432ff195659ecc944eebf232f1acb2bf5b53e5bcf0ad3e9a2ab2d1a6fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.947874 4900 generic.go:334] "Generic (PLEG): container finished" podID="ed8121d7-7b10-44c5-9b43-9088b198f34c" containerID="88407bc6bd8e2dfedf02e3b155acca5b6726043dcabd9f0d892d020af43e5c9f" exitCode=0 Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.950470 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-scripts" (OuterVolumeSpecName: "scripts") pod "7b0e50c7-752e-4879-a382-ff97500cfd89" (UID: "7b0e50c7-752e-4879-a382-ff97500cfd89"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.958218 4900 scope.go:117] "RemoveContainer" containerID="72b6e3300d0787fe99949f1eade7bb409bf6f76d9bb245ec44e8976a1315be81" Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.958427 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="510d10432ff195659ecc944eebf232f1acb2bf5b53e5bcf0ad3e9a2ab2d1a6fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.997865 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="510d10432ff195659ecc944eebf232f1acb2bf5b53e5bcf0ad3e9a2ab2d1a6fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 02 14:07:26 crc kubenswrapper[4900]: E1202 14:07:26.997934 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="784ffd24-69a7-4235-9d4d-4a1be6f183fd" containerName="nova-cell1-conductor-conductor" Dec 02 14:07:26 crc kubenswrapper[4900]: I1202 14:07:26.998426 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0e50c7-752e-4879-a382-ff97500cfd89-kube-api-access-jcgvs" (OuterVolumeSpecName: "kube-api-access-jcgvs") pod "7b0e50c7-752e-4879-a382-ff97500cfd89" (UID: "7b0e50c7-752e-4879-a382-ff97500cfd89"). InnerVolumeSpecName "kube-api-access-jcgvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.008087 4900 generic.go:334] "Generic (PLEG): container finished" podID="ff7f1bf7-3734-4c0e-afc2-d841cc97a529" containerID="1f5e54223736749411f69a0010169896be9ba0e0109828ad0b779c7a752eb6b2" exitCode=0 Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.012155 4900 generic.go:334] "Generic (PLEG): container finished" podID="df8a1411-7582-4f42-8b5a-3b97cebd9254" containerID="bb84c573856baa8e5ade2df9297e29c43751017e92787104acb31419fd9eb3d4" exitCode=0 Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.014247 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.014876 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron1777-account-delete-mh9jf" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.016477 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79db7cb55d-4cs7x" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.026540 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b0e50c7-752e-4879-a382-ff97500cfd89-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.026564 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.026574 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf57v\" (UniqueName: \"kubernetes.io/projected/8c322124-103a-40d2-a429-f018076f88ff-kube-api-access-zf57v\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.026584 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcgvs\" (UniqueName: \"kubernetes.io/projected/7b0e50c7-752e-4879-a382-ff97500cfd89-kube-api-access-jcgvs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.026592 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c322124-103a-40d2-a429-f018076f88ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.027528 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26491a11-ebc3-4790-af52-2459d18e0a5b" path="/var/lib/kubelet/pods/26491a11-ebc3-4790-af52-2459d18e0a5b/volumes" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.028826 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f3a381-1653-4c21-929c-86e764024d0c" path="/var/lib/kubelet/pods/38f3a381-1653-4c21-929c-86e764024d0c/volumes" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.031390 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac832e52-24a1-4e72-84ad-47259077412f" path="/var/lib/kubelet/pods/ac832e52-24a1-4e72-84ad-47259077412f/volumes" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.034993 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5bef1d4-9515-4138-84b8-da85155c6f5a" path="/var/lib/kubelet/pods/b5bef1d4-9515-4138-84b8-da85155c6f5a/volumes" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.035495 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc006dae-0e38-4f4a-b689-40cfa0bacf72" path="/var/lib/kubelet/pods/bc006dae-0e38-4f4a-b689-40cfa0bacf72/volumes" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.036012 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da048a5a-7760-4892-8b7d-8496bcba1142" path="/var/lib/kubelet/pods/da048a5a-7760-4892-8b7d-8496bcba1142/volumes" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.037232 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e7b745-29c6-452c-b9ff-392b476fddd1" path="/var/lib/kubelet/pods/e2e7b745-29c6-452c-b9ff-392b476fddd1/volumes" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.037745 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa3c7373-c9f5-4957-abaa-e2719e654d2b" path="/var/lib/kubelet/pods/fa3c7373-c9f5-4957-abaa-e2719e654d2b/volumes" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.046224 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"ed8121d7-7b10-44c5-9b43-9088b198f34c","Type":"ContainerDied","Data":"88407bc6bd8e2dfedf02e3b155acca5b6726043dcabd9f0d892d020af43e5c9f"} Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.046270 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder5ba5-account-delete-kd6ml" event={"ID":"ff7f1bf7-3734-4c0e-afc2-d841cc97a529","Type":"ContainerDied","Data":"1f5e54223736749411f69a0010169896be9ba0e0109828ad0b779c7a752eb6b2"} Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.046289 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance0080-account-delete-fz4nc" event={"ID":"df8a1411-7582-4f42-8b5a-3b97cebd9254","Type":"ContainerDied","Data":"bb84c573856baa8e5ade2df9297e29c43751017e92787104acb31419fd9eb3d4"} Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.046304 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron1777-account-delete-mh9jf" event={"ID":"8c322124-103a-40d2-a429-f018076f88ff","Type":"ContainerDied","Data":"bcafa1e80d465da68c587ebc1409ef867ae714ffcacdaafebdcecd8dbd1899ba"} Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.046316 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcafa1e80d465da68c587ebc1409ef867ae714ffcacdaafebdcecd8dbd1899ba" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.046325 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79db7cb55d-4cs7x" event={"ID":"9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571","Type":"ContainerDied","Data":"1e1869fc88c75d6f11a89a6d01f794c6ff750d4790f3a4705e89b8d389ca2081"} Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.056019 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.058969 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.061347 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.063949 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ad78a256-27f0-46a9-addb-dbc7b41bebd2","Type":"ContainerDied","Data":"1a0cfda2f24ef738ef4727e6d1a46c0bdf990f114f0d6688927d0693e3aa874b"} Dec 02 14:07:27 crc kubenswrapper[4900]: I1202 14:07:27.065358 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127353 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b0e50c7-752e-4879-a382-ff97500cfd89" (UID: "7b0e50c7-752e-4879-a382-ff97500cfd89"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127621 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-logs\") pod \"9e062e50-5a22-45c0-adab-9f78980eb851\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127687 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-internal-tls-certs\") pod \"557a84eb-0882-44c1-b4db-7c8a19e1303d\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127721 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data-custom\") pod \"557a84eb-0882-44c1-b4db-7c8a19e1303d\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127748 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-certs\") pod \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127782 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-httpd-run\") pod \"9e062e50-5a22-45c0-adab-9f78980eb851\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127802 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/557a84eb-0882-44c1-b4db-7c8a19e1303d-etc-machine-id\") pod \"557a84eb-0882-44c1-b4db-7c8a19e1303d\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127874 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-combined-ca-bundle\") pod \"9e062e50-5a22-45c0-adab-9f78980eb851\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127898 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-scripts\") pod \"557a84eb-0882-44c1-b4db-7c8a19e1303d\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127920 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g95rj\" (UniqueName: \"kubernetes.io/projected/557a84eb-0882-44c1-b4db-7c8a19e1303d-kube-api-access-g95rj\") pod \"557a84eb-0882-44c1-b4db-7c8a19e1303d\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127938 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-public-tls-certs\") pod \"557a84eb-0882-44c1-b4db-7c8a19e1303d\" 
(UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.127959 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"9e062e50-5a22-45c0-adab-9f78980eb851\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.128014 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-config\") pod \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.128037 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/557a84eb-0882-44c1-b4db-7c8a19e1303d-logs\") pod \"557a84eb-0882-44c1-b4db-7c8a19e1303d\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.128054 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-scripts\") pod \"9e062e50-5a22-45c0-adab-9f78980eb851\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.128083 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qdpt\" (UniqueName: \"kubernetes.io/projected/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-api-access-7qdpt\") pod \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.128118 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-combined-ca-bundle\") pod \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\" (UID: \"97e183cf-c0fe-4f94-9c03-7f8fa792c4af\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.128154 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data\") pod \"557a84eb-0882-44c1-b4db-7c8a19e1303d\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.128168 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-config-data\") pod \"9e062e50-5a22-45c0-adab-9f78980eb851\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.128203 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-combined-ca-bundle\") pod \"557a84eb-0882-44c1-b4db-7c8a19e1303d\" (UID: \"557a84eb-0882-44c1-b4db-7c8a19e1303d\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.128225 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49vkm\" (UniqueName: \"kubernetes.io/projected/9e062e50-5a22-45c0-adab-9f78980eb851-kube-api-access-49vkm\") pod \"9e062e50-5a22-45c0-adab-9f78980eb851\" (UID: 
\"9e062e50-5a22-45c0-adab-9f78980eb851\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.128278 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-internal-tls-certs\") pod \"9e062e50-5a22-45c0-adab-9f78980eb851\" (UID: \"9e062e50-5a22-45c0-adab-9f78980eb851\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.129091 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/557a84eb-0882-44c1-b4db-7c8a19e1303d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "557a84eb-0882-44c1-b4db-7c8a19e1303d" (UID: "557a84eb-0882-44c1-b4db-7c8a19e1303d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.129687 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9e062e50-5a22-45c0-adab-9f78980eb851" (UID: "9e062e50-5a22-45c0-adab-9f78980eb851"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.129777 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-logs" (OuterVolumeSpecName: "logs") pod "9e062e50-5a22-45c0-adab-9f78980eb851" (UID: "9e062e50-5a22-45c0-adab-9f78980eb851"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.130341 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.130356 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.130366 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e062e50-5a22-45c0-adab-9f78980eb851-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.133760 4900 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/557a84eb-0882-44c1-b4db-7c8a19e1303d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.144138 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/557a84eb-0882-44c1-b4db-7c8a19e1303d-logs" (OuterVolumeSpecName: "logs") pod "557a84eb-0882-44c1-b4db-7c8a19e1303d" (UID: "557a84eb-0882-44c1-b4db-7c8a19e1303d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.145676 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-scripts" (OuterVolumeSpecName: "scripts") pod "557a84eb-0882-44c1-b4db-7c8a19e1303d" (UID: "557a84eb-0882-44c1-b4db-7c8a19e1303d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.163374 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "557a84eb-0882-44c1-b4db-7c8a19e1303d" (UID: "557a84eb-0882-44c1-b4db-7c8a19e1303d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.169328 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e062e50-5a22-45c0-adab-9f78980eb851-kube-api-access-49vkm" (OuterVolumeSpecName: "kube-api-access-49vkm") pod "9e062e50-5a22-45c0-adab-9f78980eb851" (UID: "9e062e50-5a22-45c0-adab-9f78980eb851"). InnerVolumeSpecName "kube-api-access-49vkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.176832 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "9e062e50-5a22-45c0-adab-9f78980eb851" (UID: "9e062e50-5a22-45c0-adab-9f78980eb851"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.176847 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-api-access-7qdpt" (OuterVolumeSpecName: "kube-api-access-7qdpt") pod "97e183cf-c0fe-4f94-9c03-7f8fa792c4af" (UID: "97e183cf-c0fe-4f94-9c03-7f8fa792c4af"). InnerVolumeSpecName "kube-api-access-7qdpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.177766 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557a84eb-0882-44c1-b4db-7c8a19e1303d-kube-api-access-g95rj" (OuterVolumeSpecName: "kube-api-access-g95rj") pod "557a84eb-0882-44c1-b4db-7c8a19e1303d" (UID: "557a84eb-0882-44c1-b4db-7c8a19e1303d"). InnerVolumeSpecName "kube-api-access-g95rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.178691 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-config-data" (OuterVolumeSpecName: "config-data") pod "7b0e50c7-752e-4879-a382-ff97500cfd89" (UID: "7b0e50c7-752e-4879-a382-ff97500cfd89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.190752 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-scripts" (OuterVolumeSpecName: "scripts") pod "9e062e50-5a22-45c0-adab-9f78980eb851" (UID: "9e062e50-5a22-45c0-adab-9f78980eb851"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.235302 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/557a84eb-0882-44c1-b4db-7c8a19e1303d-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.235325 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.235336 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qdpt\" (UniqueName: \"kubernetes.io/projected/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-api-access-7qdpt\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.235344 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49vkm\" (UniqueName: \"kubernetes.io/projected/9e062e50-5a22-45c0-adab-9f78980eb851-kube-api-access-49vkm\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.235352 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.235362 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.235377 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.235388 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g95rj\" (UniqueName: \"kubernetes.io/projected/557a84eb-0882-44c1-b4db-7c8a19e1303d-kube-api-access-g95rj\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.235410 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.263766 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e062e50-5a22-45c0-adab-9f78980eb851" (UID: "9e062e50-5a22-45c0-adab-9f78980eb851"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.267321 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "557a84eb-0882-44c1-b4db-7c8a19e1303d" (UID: "557a84eb-0882-44c1-b4db-7c8a19e1303d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.269730 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7b0e50c7-752e-4879-a382-ff97500cfd89" (UID: "7b0e50c7-752e-4879-a382-ff97500cfd89"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.288387 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97e183cf-c0fe-4f94-9c03-7f8fa792c4af" (UID: "97e183cf-c0fe-4f94-9c03-7f8fa792c4af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.304784 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="812fa799-d734-4151-b87f-25d638295714" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.106:11211: connect: connection refused" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.332931 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "97e183cf-c0fe-4f94-9c03-7f8fa792c4af" (UID: "97e183cf-c0fe-4f94-9c03-7f8fa792c4af"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.339922 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.339952 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.339963 4900 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.339974 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.339985 4900 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.359459 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "97e183cf-c0fe-4f94-9c03-7f8fa792c4af" (UID: "97e183cf-c0fe-4f94-9c03-7f8fa792c4af"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.374513 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "557a84eb-0882-44c1-b4db-7c8a19e1303d" (UID: "557a84eb-0882-44c1-b4db-7c8a19e1303d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.381029 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9e062e50-5a22-45c0-adab-9f78980eb851" (UID: "9e062e50-5a22-45c0-adab-9f78980eb851"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.391804 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.394739 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7b0e50c7-752e-4879-a382-ff97500cfd89" (UID: "7b0e50c7-752e-4879-a382-ff97500cfd89"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.395495 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "557a84eb-0882-44c1-b4db-7c8a19e1303d" (UID: "557a84eb-0882-44c1-b4db-7c8a19e1303d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.407437 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data" (OuterVolumeSpecName: "config-data") pod "557a84eb-0882-44c1-b4db-7c8a19e1303d" (UID: "557a84eb-0882-44c1-b4db-7c8a19e1303d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.409113 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-config-data" (OuterVolumeSpecName: "config-data") pod "9e062e50-5a22-45c0-adab-9f78980eb851" (UID: "9e062e50-5a22-45c0-adab-9f78980eb851"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.441406 4900 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.441432 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.441443 4900 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/97e183cf-c0fe-4f94-9c03-7f8fa792c4af-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.441453 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.441462 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.441471 4900 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b0e50c7-752e-4879-a382-ff97500cfd89-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.441480 4900 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e062e50-5a22-45c0-adab-9f78980eb851-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.441488 4900 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/557a84eb-0882-44c1-b4db-7c8a19e1303d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.473125 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.486448 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.536731 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.542783 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-logs\") pod \"1a302619-4a69-4e62-b7cb-6812b771f6d4\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.542821 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-nova-metadata-tls-certs\") pod \"5624f474-dd54-4580-b816-f238cc733b5a\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.542847 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb26b\" (UniqueName: \"kubernetes.io/projected/1a302619-4a69-4e62-b7cb-6812b771f6d4-kube-api-access-sb26b\") pod \"1a302619-4a69-4e62-b7cb-6812b771f6d4\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.542886 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-public-tls-certs\") pod \"1a302619-4a69-4e62-b7cb-6812b771f6d4\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.542918 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-combined-ca-bundle\") pod \"1a302619-4a69-4e62-b7cb-6812b771f6d4\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.542933 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"1a302619-4a69-4e62-b7cb-6812b771f6d4\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.542999 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-httpd-run\") pod \"1a302619-4a69-4e62-b7cb-6812b771f6d4\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.543016 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-config-data\") pod \"1a302619-4a69-4e62-b7cb-6812b771f6d4\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.543040 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-scripts\") pod \"1a302619-4a69-4e62-b7cb-6812b771f6d4\" (UID: \"1a302619-4a69-4e62-b7cb-6812b771f6d4\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.543059 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b58h\" (UniqueName: 
\"kubernetes.io/projected/5624f474-dd54-4580-b816-f238cc733b5a-kube-api-access-7b58h\") pod \"5624f474-dd54-4580-b816-f238cc733b5a\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.543100 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-combined-ca-bundle\") pod \"5624f474-dd54-4580-b816-f238cc733b5a\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.543213 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-config-data\") pod \"5624f474-dd54-4580-b816-f238cc733b5a\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.543297 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5624f474-dd54-4580-b816-f238cc733b5a-logs\") pod \"5624f474-dd54-4580-b816-f238cc733b5a\" (UID: \"5624f474-dd54-4580-b816-f238cc733b5a\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.544058 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5624f474-dd54-4580-b816-f238cc733b5a-logs" (OuterVolumeSpecName: "logs") pod "5624f474-dd54-4580-b816-f238cc733b5a" (UID: "5624f474-dd54-4580-b816-f238cc733b5a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.544417 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-logs" (OuterVolumeSpecName: "logs") pod "1a302619-4a69-4e62-b7cb-6812b771f6d4" (UID: "1a302619-4a69-4e62-b7cb-6812b771f6d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.546080 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1a302619-4a69-4e62-b7cb-6812b771f6d4" (UID: "1a302619-4a69-4e62-b7cb-6812b771f6d4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.557421 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a302619-4a69-4e62-b7cb-6812b771f6d4-kube-api-access-sb26b" (OuterVolumeSpecName: "kube-api-access-sb26b") pod "1a302619-4a69-4e62-b7cb-6812b771f6d4" (UID: "1a302619-4a69-4e62-b7cb-6812b771f6d4"). InnerVolumeSpecName "kube-api-access-sb26b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.559138 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.559986 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "1a302619-4a69-4e62-b7cb-6812b771f6d4" (UID: "1a302619-4a69-4e62-b7cb-6812b771f6d4"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.576674 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5624f474-dd54-4580-b816-f238cc733b5a-kube-api-access-7b58h" (OuterVolumeSpecName: "kube-api-access-7b58h") pod "5624f474-dd54-4580-b816-f238cc733b5a" (UID: "5624f474-dd54-4580-b816-f238cc733b5a"). InnerVolumeSpecName "kube-api-access-7b58h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.583246 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79db7cb55d-4cs7x"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.583662 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-scripts" (OuterVolumeSpecName: "scripts") pod "1a302619-4a69-4e62-b7cb-6812b771f6d4" (UID: "1a302619-4a69-4e62-b7cb-6812b771f6d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.593731 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-79db7cb55d-4cs7x"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.602270 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6f6dffdfb8-h46pm"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.606804 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a302619-4a69-4e62-b7cb-6812b771f6d4" (UID: "1a302619-4a69-4e62-b7cb-6812b771f6d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.608347 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6f6dffdfb8-h46pm"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.612221 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-config-data" (OuterVolumeSpecName: "config-data") pod "5624f474-dd54-4580-b816-f238cc733b5a" (UID: "5624f474-dd54-4580-b816-f238cc733b5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.614031 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.623855 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.624467 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5624f474-dd54-4580-b816-f238cc733b5a" (UID: "5624f474-dd54-4580-b816-f238cc733b5a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.637397 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-config-data" (OuterVolumeSpecName: "config-data") pod "1a302619-4a69-4e62-b7cb-6812b771f6d4" (UID: "1a302619-4a69-4e62-b7cb-6812b771f6d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.645922 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.645964 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.645975 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.645985 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.645995 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.646004 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b58h\" (UniqueName: \"kubernetes.io/projected/5624f474-dd54-4580-b816-f238cc733b5a-kube-api-access-7b58h\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.646015 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.646023 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.646031 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5624f474-dd54-4580-b816-f238cc733b5a-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.646039 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a302619-4a69-4e62-b7cb-6812b771f6d4-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.646046 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb26b\" (UniqueName: \"kubernetes.io/projected/1a302619-4a69-4e62-b7cb-6812b771f6d4-kube-api-access-sb26b\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.647558 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5624f474-dd54-4580-b816-f238cc733b5a" (UID: "5624f474-dd54-4580-b816-f238cc733b5a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.662995 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.663183 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1a302619-4a69-4e62-b7cb-6812b771f6d4" (UID: "1a302619-4a69-4e62-b7cb-6812b771f6d4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:27.688895 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:27.689123 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:27.689297 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:27.689319 4900 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server" Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:27.690056 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:27.691162 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:27.693884 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:27.693911 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovs-vswitchd" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.747230 4900 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5624f474-dd54-4580-b816-f238cc733b5a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.747274 4900 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a302619-4a69-4e62-b7cb-6812b771f6d4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.747298 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.759535 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gn6td" podUID="09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" containerName="ovn-controller" probeResult="failure" output="command timed out" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.813771 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gn6td" podUID="09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" containerName="ovn-controller" probeResult="failure" output=< Dec 02 14:07:28 crc kubenswrapper[4900]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Dec 02 14:07:28 crc kubenswrapper[4900]: > Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:27.849245 4900 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:27.849330 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data podName:8db82600-180c-4114-8006-551e1b566ce5 nodeName:}" failed. No retries permitted until 2025-12-02 14:07:35.849308151 +0000 UTC m=+1501.265122002 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data") pod "rabbitmq-cell1-server-0" (UID: "8db82600-180c-4114-8006-551e1b566ce5") : configmap "rabbitmq-cell1-config-data" not found Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:27.900855 4900 scope.go:117] "RemoveContainer" containerID="32bcc73f011b1518d11ed9404adb699f4c8c5a3bbe633bebd10a6b2b3117dd08" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.074865 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell02641-account-delete-zn9bc" event={"ID":"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c","Type":"ContainerDied","Data":"5ef691c8856711c89a8a27c9a1db833a51f4e49b6a4ed9a4c5a87b2a2e1358e0"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.075147 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ef691c8856711c89a8a27c9a1db833a51f4e49b6a4ed9a4c5a87b2a2e1358e0" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.080778 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.081707 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"557a84eb-0882-44c1-b4db-7c8a19e1303d","Type":"ContainerDied","Data":"5a3f65bd50c619be17abb40ae267765cd3c4242d17696dfc1b11ad8e4c6cff2f"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.096058 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1a302619-4a69-4e62-b7cb-6812b771f6d4","Type":"ContainerDied","Data":"dfb29081b0c147c1598ac53855a5d14fb7c89cbc4ff27d4e59402a2dc5b280a0"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.096065 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.106370 4900 generic.go:334] "Generic (PLEG): container finished" podID="bf4fd62f-751c-4ba7-8582-3d953bdc0bf6" containerID="7a015b969677f8a38ffbf9b4e7f89474014d3449b484894c6c2a8469cb1a3e61" exitCode=0 Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.106479 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6","Type":"ContainerDied","Data":"7a015b969677f8a38ffbf9b4e7f89474014d3449b484894c6c2a8469cb1a3e61"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.120114 4900 scope.go:117] "RemoveContainer" containerID="4735b662992f5922ce1cad03f6fa4def9946bffac92b4623b987fa581665c1db" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.121995 4900 generic.go:334] "Generic (PLEG): container finished" podID="812fa799-d734-4151-b87f-25d638295714" containerID="32a45c77a6ef2f6050d3ead873d9d3fa8d6013c262c2306ca091361bfe251ace" exitCode=0 Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.122052 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"812fa799-d734-4151-b87f-25d638295714","Type":"ContainerDied","Data":"32a45c77a6ef2f6050d3ead873d9d3fa8d6013c262c2306ca091361bfe251ace"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.124038 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" event={"ID":"241c5e6f-d993-4c7a-90a2-1ae1786dbea2","Type":"ContainerDied","Data":"4039b0524567620b8df758af492d5c9757a0f4df02040848b968c2078da15855"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.124057 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4039b0524567620b8df758af492d5c9757a0f4df02040848b968c2078da15855" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.125863 4900 generic.go:334] "Generic (PLEG): container finished" podID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerID="863da9d070c65e09dd0916cf6ca8bf7a03e08fb91cbe309cd740bc4a5f3a11aa" exitCode=0 Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.125905 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8a322a0-752b-4ab1-9418-41c4747eebee","Type":"ContainerDied","Data":"863da9d070c65e09dd0916cf6ca8bf7a03e08fb91cbe309cd740bc4a5f3a11aa"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.127930 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5624f474-dd54-4580-b816-f238cc733b5a","Type":"ContainerDied","Data":"2b959af286e31deabd35f3b4615973d13c65c11a556390fda2c9977139897d1a"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.127998 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.152472 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97e183cf-c0fe-4f94-9c03-7f8fa792c4af","Type":"ContainerDied","Data":"1a4f688d8b636455551170c1dabe6e8bc3e3dee2b38166a2df8d034be44e6bbf"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.152592 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:28.159217 4900 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 02 14:07:28 crc kubenswrapper[4900]: E1202 14:07:28.159278 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data podName:e410de46-b373-431a-8486-21a6f1268e41 nodeName:}" failed. No retries permitted until 2025-12-02 14:07:36.159262878 +0000 UTC m=+1501.575076729 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data") pod "rabbitmq-server-0" (UID: "e410de46-b373-431a-8486-21a6f1268e41") : configmap "rabbitmq-config-data" not found Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.161722 4900 scope.go:117] "RemoveContainer" containerID="9d4da9c7aa6120d5ccd058c5a050090e9130cfb769ef31b34092dd5d53ce8475" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.168338 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ed8121d7-7b10-44c5-9b43-9088b198f34c","Type":"ContainerDied","Data":"424f04867cd8a3508f4b024310b83812c04c165e5556465186846c4594b9ea2c"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.168371 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="424f04867cd8a3508f4b024310b83812c04c165e5556465186846c4594b9ea2c" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.168521 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapib12b-account-delete-rl98n" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.170417 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9e062e50-5a22-45c0-adab-9f78980eb851","Type":"ContainerDied","Data":"647aaaa48b9f9742e4f1ad36b654728741cad7f8cc1bfae5cbdb5f7f261f9a51"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.170525 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.175149 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapib12b-account-delete-rl98n" event={"ID":"60ab71b1-8ff6-488c-9401-9b63341b08dd","Type":"ContainerDied","Data":"7baca948b0da00c1542ff63bc396fc96cd03852806ba07d7e41562963c5ba080"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.175226 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapib12b-account-delete-rl98n" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.178395 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" event={"ID":"00825975-35eb-46d6-8aeb-753170564467","Type":"ContainerDied","Data":"c51378899bf00d1d35448264548d97a29edb696e23a0c7d82dc9c0e5f31cabc0"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.178435 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c51378899bf00d1d35448264548d97a29edb696e23a0c7d82dc9c0e5f31cabc0" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.191849 4900 generic.go:334] "Generic (PLEG): container finished" podID="ab69f1a2-78df-4097-a527-0b90345cdcfe" containerID="f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9" exitCode=0 Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.192042 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ab69f1a2-78df-4097-a527-0b90345cdcfe","Type":"ContainerDied","Data":"f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9"} Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.261771 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgktq\" (UniqueName: \"kubernetes.io/projected/60ab71b1-8ff6-488c-9401-9b63341b08dd-kube-api-access-zgktq\") pod \"60ab71b1-8ff6-488c-9401-9b63341b08dd\" (UID: \"60ab71b1-8ff6-488c-9401-9b63341b08dd\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.262234 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ab71b1-8ff6-488c-9401-9b63341b08dd-operator-scripts\") pod \"60ab71b1-8ff6-488c-9401-9b63341b08dd\" (UID: \"60ab71b1-8ff6-488c-9401-9b63341b08dd\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.263023 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ab71b1-8ff6-488c-9401-9b63341b08dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60ab71b1-8ff6-488c-9401-9b63341b08dd" (UID: "60ab71b1-8ff6-488c-9401-9b63341b08dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.268707 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ab71b1-8ff6-488c-9401-9b63341b08dd-kube-api-access-zgktq" (OuterVolumeSpecName: "kube-api-access-zgktq") pod "60ab71b1-8ff6-488c-9401-9b63341b08dd" (UID: "60ab71b1-8ff6-488c-9401-9b63341b08dd"). InnerVolumeSpecName "kube-api-access-zgktq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.314808 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.321531 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.322098 4900 scope.go:117] "RemoveContainer" containerID="440f785e6ac340819ae403625dc734fd43ed1abbd0b52db9080939d07419abce" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.326021 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell02641-account-delete-zn9bc" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.328050 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.340740 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.350449 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364327 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jc4z\" (UniqueName: \"kubernetes.io/projected/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-kube-api-access-2jc4z\") pod \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364382 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data-custom\") pod \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364502 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-operator-scripts\") pod \"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c\" (UID: \"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364551 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-public-tls-certs\") pod \"00825975-35eb-46d6-8aeb-753170564467\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364603 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data\") pod \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364659 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-internal-tls-certs\") pod \"00825975-35eb-46d6-8aeb-753170564467\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364718 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvj92\" (UniqueName: \"kubernetes.io/projected/ed8121d7-7b10-44c5-9b43-9088b198f34c-kube-api-access-cvj92\") pod \"ed8121d7-7b10-44c5-9b43-9088b198f34c\" (UID: \"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364754 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-combined-ca-bundle\") pod \"ed8121d7-7b10-44c5-9b43-9088b198f34c\" (UID: \"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 
14:07:28.364781 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-combined-ca-bundle\") pod \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364858 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-config-data\") pod \"ed8121d7-7b10-44c5-9b43-9088b198f34c\" (UID: \"ed8121d7-7b10-44c5-9b43-9088b198f34c\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364907 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj8t4\" (UniqueName: \"kubernetes.io/projected/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-kube-api-access-nj8t4\") pod \"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c\" (UID: \"e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364937 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-log-httpd\") pod \"00825975-35eb-46d6-8aeb-753170564467\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364963 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkn7z\" (UniqueName: \"kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-kube-api-access-lkn7z\") pod \"00825975-35eb-46d6-8aeb-753170564467\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.364987 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-etc-swift\") pod \"00825975-35eb-46d6-8aeb-753170564467\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.365011 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-config-data\") pod \"00825975-35eb-46d6-8aeb-753170564467\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.365072 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-logs\") pod \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\" (UID: \"241c5e6f-d993-4c7a-90a2-1ae1786dbea2\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.365100 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-run-httpd\") pod \"00825975-35eb-46d6-8aeb-753170564467\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.365121 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-combined-ca-bundle\") pod \"00825975-35eb-46d6-8aeb-753170564467\" (UID: \"00825975-35eb-46d6-8aeb-753170564467\") " Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.365634 4900 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ab71b1-8ff6-488c-9401-9b63341b08dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.365669 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgktq\" (UniqueName: \"kubernetes.io/projected/60ab71b1-8ff6-488c-9401-9b63341b08dd-kube-api-access-zgktq\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.367694 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c" (UID: "e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.368137 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "00825975-35eb-46d6-8aeb-753170564467" (UID: "00825975-35eb-46d6-8aeb-753170564467"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.368369 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "00825975-35eb-46d6-8aeb-753170564467" (UID: "00825975-35eb-46d6-8aeb-753170564467"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.368857 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-logs" (OuterVolumeSpecName: "logs") pod "241c5e6f-d993-4c7a-90a2-1ae1786dbea2" (UID: "241c5e6f-d993-4c7a-90a2-1ae1786dbea2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.369843 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.374404 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-kube-api-access-nj8t4" (OuterVolumeSpecName: "kube-api-access-nj8t4") pod "e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c" (UID: "e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c"). InnerVolumeSpecName "kube-api-access-nj8t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.374563 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "241c5e6f-d993-4c7a-90a2-1ae1786dbea2" (UID: "241c5e6f-d993-4c7a-90a2-1ae1786dbea2"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.375983 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-kube-api-access-lkn7z" (OuterVolumeSpecName: "kube-api-access-lkn7z") pod "00825975-35eb-46d6-8aeb-753170564467" (UID: "00825975-35eb-46d6-8aeb-753170564467"). InnerVolumeSpecName "kube-api-access-lkn7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.378277 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8121d7-7b10-44c5-9b43-9088b198f34c-kube-api-access-cvj92" (OuterVolumeSpecName: "kube-api-access-cvj92") pod "ed8121d7-7b10-44c5-9b43-9088b198f34c" (UID: "ed8121d7-7b10-44c5-9b43-9088b198f34c"). InnerVolumeSpecName "kube-api-access-cvj92". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.378901 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "00825975-35eb-46d6-8aeb-753170564467" (UID: "00825975-35eb-46d6-8aeb-753170564467"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.383739 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.392039 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.395793 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-kube-api-access-2jc4z" (OuterVolumeSpecName: "kube-api-access-2jc4z") pod "241c5e6f-d993-4c7a-90a2-1ae1786dbea2" (UID: "241c5e6f-d993-4c7a-90a2-1ae1786dbea2"). InnerVolumeSpecName "kube-api-access-2jc4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.397778 4900 scope.go:117] "RemoveContainer" containerID="780db50c9c1ada3d5ce136afa95e168fc995789ee6f6731c4c9529970d7dfd6e" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.398698 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.416859 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.430220 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.433574 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed8121d7-7b10-44c5-9b43-9088b198f34c" (UID: "ed8121d7-7b10-44c5-9b43-9088b198f34c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.433973 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-config-data" (OuterVolumeSpecName: "config-data") pod "ed8121d7-7b10-44c5-9b43-9088b198f34c" (UID: "ed8121d7-7b10-44c5-9b43-9088b198f34c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.446514 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.453550 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.465536 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "241c5e6f-d993-4c7a-90a2-1ae1786dbea2" (UID: "241c5e6f-d993-4c7a-90a2-1ae1786dbea2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468025 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468052 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj8t4\" (UniqueName: \"kubernetes.io/projected/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-kube-api-access-nj8t4\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468063 4900 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468073 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkn7z\" (UniqueName: \"kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-kube-api-access-lkn7z\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468082 4900 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00825975-35eb-46d6-8aeb-753170564467-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468091 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468100 4900 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00825975-35eb-46d6-8aeb-753170564467-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468108 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jc4z\" (UniqueName: \"kubernetes.io/projected/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-kube-api-access-2jc4z\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468117 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468125 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468133 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvj92\" (UniqueName: \"kubernetes.io/projected/ed8121d7-7b10-44c5-9b43-9088b198f34c-kube-api-access-cvj92\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468142 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8121d7-7b10-44c5-9b43-9088b198f34c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.468151 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.491279 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00825975-35eb-46d6-8aeb-753170564467" (UID: "00825975-35eb-46d6-8aeb-753170564467"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.494211 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "00825975-35eb-46d6-8aeb-753170564467" (UID: "00825975-35eb-46d6-8aeb-753170564467"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.495705 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-config-data" (OuterVolumeSpecName: "config-data") pod "00825975-35eb-46d6-8aeb-753170564467" (UID: "00825975-35eb-46d6-8aeb-753170564467"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.513524 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data" (OuterVolumeSpecName: "config-data") pod "241c5e6f-d993-4c7a-90a2-1ae1786dbea2" (UID: "241c5e6f-d993-4c7a-90a2-1ae1786dbea2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.514906 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "00825975-35eb-46d6-8aeb-753170564467" (UID: "00825975-35eb-46d6-8aeb-753170564467"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.519375 4900 scope.go:117] "RemoveContainer" containerID="7eb9fd41a54b8a69b7cc6d75b1be55aec13e8d931357769f99ce0bad86542d63" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.539673 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapib12b-account-delete-rl98n"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.546297 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapib12b-account-delete-rl98n"] Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.552600 4900 scope.go:117] "RemoveContainer" containerID="9ad89e0edddd80cb4770a29e75a6ba59954a7129d14ce51b8b71ef393689cab0" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.569617 4900 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.569653 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/241c5e6f-d993-4c7a-90a2-1ae1786dbea2-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.569664 4900 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.569674 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.569682 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00825975-35eb-46d6-8aeb-753170564467-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.576899 4900 scope.go:117] "RemoveContainer" containerID="111a5b135a319c9f662db3b3e6a11bfecf64162e38bdb1b58f02ab519892e209" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.598221 4900 scope.go:117] "RemoveContainer" containerID="23b3303be4e59abe74fc9f9832f7a048be4602b7bb3f5d0f5af2d708139fa0ab" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.612687 4900 scope.go:117] "RemoveContainer" containerID="2cfb4aee9c0e60e8a52f829f41a8643aa512a3a323f16d80fe672fa39d031a24" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.629351 4900 scope.go:117] "RemoveContainer" containerID="974bb345b96f61cacf2fe7abb81edc513dd4982f9f5eaf42cece693b6d995322" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.651682 4900 scope.go:117] "RemoveContainer" containerID="8b057749e05312ac8b867b23be07aedbc2a70bbb08e3cd32bd27a1b2582ac140" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.673132 4900 scope.go:117] "RemoveContainer" containerID="9dd52ef17bb2168dda2bf0989c02753d795d4aedc98c658cd7c2b41c233f6f31" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.928545 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a302619-4a69-4e62-b7cb-6812b771f6d4" path="/var/lib/kubelet/pods/1a302619-4a69-4e62-b7cb-6812b771f6d4/volumes" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.929438 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="557a84eb-0882-44c1-b4db-7c8a19e1303d" path="/var/lib/kubelet/pods/557a84eb-0882-44c1-b4db-7c8a19e1303d/volumes" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.930133 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5624f474-dd54-4580-b816-f238cc733b5a" path="/var/lib/kubelet/pods/5624f474-dd54-4580-b816-f238cc733b5a/volumes" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.931231 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ab71b1-8ff6-488c-9401-9b63341b08dd" path="/var/lib/kubelet/pods/60ab71b1-8ff6-488c-9401-9b63341b08dd/volumes" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.931752 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0e50c7-752e-4879-a382-ff97500cfd89" path="/var/lib/kubelet/pods/7b0e50c7-752e-4879-a382-ff97500cfd89/volumes" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.932281 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e183cf-c0fe-4f94-9c03-7f8fa792c4af" path="/var/lib/kubelet/pods/97e183cf-c0fe-4f94-9c03-7f8fa792c4af/volumes" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.933430 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" path="/var/lib/kubelet/pods/9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571/volumes" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.934053 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e062e50-5a22-45c0-adab-9f78980eb851" path="/var/lib/kubelet/pods/9e062e50-5a22-45c0-adab-9f78980eb851/volumes" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.935141 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad78a256-27f0-46a9-addb-dbc7b41bebd2" path="/var/lib/kubelet/pods/ad78a256-27f0-46a9-addb-dbc7b41bebd2/volumes" Dec 02 14:07:28 crc kubenswrapper[4900]: I1202 14:07:28.935610 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c49f994e-0d9a-4312-995d-d84d93f31f01" path="/var/lib/kubelet/pods/c49f994e-0d9a-4312-995d-d84d93f31f01/volumes" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.150073 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican4c1f-account-delete-ksk64" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.180416 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27a1401-3ad1-40a3-9ce6-08cac86fef42-operator-scripts\") pod \"b27a1401-3ad1-40a3-9ce6-08cac86fef42\" (UID: \"b27a1401-3ad1-40a3-9ce6-08cac86fef42\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.180550 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xswgl\" (UniqueName: \"kubernetes.io/projected/b27a1401-3ad1-40a3-9ce6-08cac86fef42-kube-api-access-xswgl\") pod \"b27a1401-3ad1-40a3-9ce6-08cac86fef42\" (UID: \"b27a1401-3ad1-40a3-9ce6-08cac86fef42\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.181738 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b27a1401-3ad1-40a3-9ce6-08cac86fef42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b27a1401-3ad1-40a3-9ce6-08cac86fef42" (UID: "b27a1401-3ad1-40a3-9ce6-08cac86fef42"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.182345 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27a1401-3ad1-40a3-9ce6-08cac86fef42-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.185755 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27a1401-3ad1-40a3-9ce6-08cac86fef42-kube-api-access-xswgl" (OuterVolumeSpecName: "kube-api-access-xswgl") pod "b27a1401-3ad1-40a3-9ce6-08cac86fef42" (UID: "b27a1401-3ad1-40a3-9ce6-08cac86fef42"). InnerVolumeSpecName "kube-api-access-xswgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.212478 4900 generic.go:334] "Generic (PLEG): container finished" podID="8db82600-180c-4114-8006-551e1b566ce5" containerID="c9b48d55f32d54ed9f77fab0b281d7e2bb2a1783f7388f1bec82ef0b685bf983" exitCode=0 Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.212542 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8db82600-180c-4114-8006-551e1b566ce5","Type":"ContainerDied","Data":"c9b48d55f32d54ed9f77fab0b281d7e2bb2a1783f7388f1bec82ef0b685bf983"} Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.221547 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican4c1f-account-delete-ksk64" event={"ID":"b27a1401-3ad1-40a3-9ce6-08cac86fef42","Type":"ContainerDied","Data":"7eeab5cd14274b5f1ba5f97ff7534d790ea20e4f0ea3be993b72b16c6a0c6052"} Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.221585 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eeab5cd14274b5f1ba5f97ff7534d790ea20e4f0ea3be993b72b16c6a0c6052" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.221636 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican4c1f-account-delete-ksk64" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.224195 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9c17cf84-2174-42d8-880a-9a643a161ef4/ovn-northd/0.log" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.224251 4900 generic.go:334] "Generic (PLEG): container finished" podID="9c17cf84-2174-42d8-880a-9a643a161ef4" containerID="9b7c3327a1cb841f7805b58f727c06e1d6143291f5866de8942d0948d6568573" exitCode=139 Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.224329 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9c17cf84-2174-42d8-880a-9a643a161ef4","Type":"ContainerDied","Data":"9b7c3327a1cb841f7805b58f727c06e1d6143291f5866de8942d0948d6568573"} Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.226109 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell02641-account-delete-zn9bc" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.226750 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c79b4474d-mx7p9" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.226792 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5b7cc7fc75-qdmm5" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.226891 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.287031 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xswgl\" (UniqueName: \"kubernetes.io/projected/b27a1401-3ad1-40a3-9ce6-08cac86fef42-kube-api-access-xswgl\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.355896 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5c79b4474d-mx7p9"] Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.371908 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5c79b4474d-mx7p9"] Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.402476 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5b7cc7fc75-qdmm5"] Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.426938 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-5b7cc7fc75-qdmm5"] Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.437537 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell02641-account-delete-zn9bc"] Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.443491 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell02641-account-delete-zn9bc"] Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.451381 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.453371 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.771932 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.796378 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9c17cf84-2174-42d8-880a-9a643a161ef4/ovn-northd/0.log" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.796714 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.796886 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-memcached-tls-certs\") pod \"812fa799-d734-4151-b87f-25d638295714\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.796937 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7khcm\" (UniqueName: \"kubernetes.io/projected/812fa799-d734-4151-b87f-25d638295714-kube-api-access-7khcm\") pod \"812fa799-d734-4151-b87f-25d638295714\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.796976 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-kolla-config\") pod \"812fa799-d734-4151-b87f-25d638295714\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.797017 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-config-data\") pod \"812fa799-d734-4151-b87f-25d638295714\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.797032 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-combined-ca-bundle\") pod \"812fa799-d734-4151-b87f-25d638295714\" (UID: \"812fa799-d734-4151-b87f-25d638295714\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.799891 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "812fa799-d734-4151-b87f-25d638295714" (UID: "812fa799-d734-4151-b87f-25d638295714"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.801463 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-config-data" (OuterVolumeSpecName: "config-data") pod "812fa799-d734-4151-b87f-25d638295714" (UID: "812fa799-d734-4151-b87f-25d638295714"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.803142 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812fa799-d734-4151-b87f-25d638295714-kube-api-access-7khcm" (OuterVolumeSpecName: "kube-api-access-7khcm") pod "812fa799-d734-4151-b87f-25d638295714" (UID: "812fa799-d734-4151-b87f-25d638295714"). InnerVolumeSpecName "kube-api-access-7khcm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.803912 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.803944 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7khcm\" (UniqueName: \"kubernetes.io/projected/812fa799-d734-4151-b87f-25d638295714-kube-api-access-7khcm\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.803954 4900 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/812fa799-d734-4151-b87f-25d638295714-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:29 crc kubenswrapper[4900]: E1202 14:07:29.813455 4900 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Dec 02 14:07:29 crc kubenswrapper[4900]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-12-02T14:07:22Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Dec 02 14:07:29 crc kubenswrapper[4900]: /etc/init.d/functions: line 589: 414 Alarm clock "$@" Dec 02 14:07:29 crc kubenswrapper[4900]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-gn6td" message=< Dec 02 14:07:29 crc kubenswrapper[4900]: Exiting ovn-controller (1) [FAILED] Dec 02 14:07:29 crc kubenswrapper[4900]: Killing ovn-controller (1) [ OK ] Dec 02 14:07:29 crc kubenswrapper[4900]: 2025-12-02T14:07:22Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Dec 02 14:07:29 crc kubenswrapper[4900]: /etc/init.d/functions: line 589: 414 Alarm clock "$@" Dec 02 14:07:29 crc kubenswrapper[4900]: > Dec 02 14:07:29 crc kubenswrapper[4900]: E1202 14:07:29.813480 4900 kuberuntime_container.go:691] "PreStop hook failed" err=< Dec 02 14:07:29 crc kubenswrapper[4900]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-12-02T14:07:22Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Dec 02 14:07:29 crc kubenswrapper[4900]: /etc/init.d/functions: line 589: 414 Alarm clock "$@" Dec 02 14:07:29 crc kubenswrapper[4900]: > pod="openstack/ovn-controller-gn6td" podUID="09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" containerName="ovn-controller" containerID="cri-o://c8588296af791c00c99ca2cc1241929618a4f8fd2a651218322a322c131b0851" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.813514 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-gn6td" podUID="09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" containerName="ovn-controller" containerID="cri-o://c8588296af791c00c99ca2cc1241929618a4f8fd2a651218322a322c131b0851" gracePeriod=22 Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.831529 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.882851 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "812fa799-d734-4151-b87f-25d638295714" (UID: "812fa799-d734-4151-b87f-25d638295714"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.908805 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "812fa799-d734-4151-b87f-25d638295714" (UID: "812fa799-d734-4151-b87f-25d638295714"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.909797 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-combined-ca-bundle\") pod \"9c17cf84-2174-42d8-880a-9a643a161ef4\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.909860 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-northd-tls-certs\") pod \"9c17cf84-2174-42d8-880a-9a643a161ef4\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.909896 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-rundir\") pod \"9c17cf84-2174-42d8-880a-9a643a161ef4\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.909916 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-config-data\") pod \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.909938 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-combined-ca-bundle\") pod \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.909988 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-scripts\") pod \"9c17cf84-2174-42d8-880a-9a643a161ef4\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.910016 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-config\") pod \"9c17cf84-2174-42d8-880a-9a643a161ef4\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.910129 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwzfl\" (UniqueName: \"kubernetes.io/projected/9c17cf84-2174-42d8-880a-9a643a161ef4-kube-api-access-vwzfl\") pod \"9c17cf84-2174-42d8-880a-9a643a161ef4\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.910151 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xzb9\" (UniqueName: 
\"kubernetes.io/projected/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-kube-api-access-8xzb9\") pod \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\" (UID: \"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.910171 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-metrics-certs-tls-certs\") pod \"9c17cf84-2174-42d8-880a-9a643a161ef4\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.910258 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "9c17cf84-2174-42d8-880a-9a643a161ef4" (UID: "9c17cf84-2174-42d8-880a-9a643a161ef4"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.912119 4900 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.912136 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/812fa799-d734-4151-b87f-25d638295714-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.912145 4900 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-rundir\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.912329 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-scripts" (OuterVolumeSpecName: "scripts") pod "9c17cf84-2174-42d8-880a-9a643a161ef4" (UID: "9c17cf84-2174-42d8-880a-9a643a161ef4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.913552 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-config" (OuterVolumeSpecName: "config") pod "9c17cf84-2174-42d8-880a-9a643a161ef4" (UID: "9c17cf84-2174-42d8-880a-9a643a161ef4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.921396 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c17cf84-2174-42d8-880a-9a643a161ef4-kube-api-access-vwzfl" (OuterVolumeSpecName: "kube-api-access-vwzfl") pod "9c17cf84-2174-42d8-880a-9a643a161ef4" (UID: "9c17cf84-2174-42d8-880a-9a643a161ef4"). InnerVolumeSpecName "kube-api-access-vwzfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.928823 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-kube-api-access-8xzb9" (OuterVolumeSpecName: "kube-api-access-8xzb9") pod "bf4fd62f-751c-4ba7-8582-3d953bdc0bf6" (UID: "bf4fd62f-751c-4ba7-8582-3d953bdc0bf6"). InnerVolumeSpecName "kube-api-access-8xzb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.964901 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c17cf84-2174-42d8-880a-9a643a161ef4" (UID: "9c17cf84-2174-42d8-880a-9a643a161ef4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.964906 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-config-data" (OuterVolumeSpecName: "config-data") pod "bf4fd62f-751c-4ba7-8582-3d953bdc0bf6" (UID: "bf4fd62f-751c-4ba7-8582-3d953bdc0bf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.969109 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.973375 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placementa999-account-delete-25gdq" Dec 02 14:07:29 crc kubenswrapper[4900]: I1202 14:07:29.982037 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder5ba5-account-delete-kd6ml" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.003193 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.003394 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance0080-account-delete-fz4nc" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014350 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-operator-scripts\") pod \"ab69f1a2-78df-4097-a527-0b90345cdcfe\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014437 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-combined-ca-bundle\") pod \"ab69f1a2-78df-4097-a527-0b90345cdcfe\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014471 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8a322a0-752b-4ab1-9418-41c4747eebee-logs\") pod \"f8a322a0-752b-4ab1-9418-41c4747eebee\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014512 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-config-data\") pod \"f8a322a0-752b-4ab1-9418-41c4747eebee\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014596 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsbv2\" (UniqueName: \"kubernetes.io/projected/ab69f1a2-78df-4097-a527-0b90345cdcfe-kube-api-access-vsbv2\") pod \"ab69f1a2-78df-4097-a527-0b90345cdcfe\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014624 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-internal-tls-certs\") pod \"f8a322a0-752b-4ab1-9418-41c4747eebee\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014715 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-generated\") pod \"ab69f1a2-78df-4097-a527-0b90345cdcfe\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014764 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-combined-ca-bundle\") pod \"f8a322a0-752b-4ab1-9418-41c4747eebee\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014807 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d121fee-d98a-4dcd-ba07-1d4b2015460d-operator-scripts\") pod \"5d121fee-d98a-4dcd-ba07-1d4b2015460d\" (UID: \"5d121fee-d98a-4dcd-ba07-1d4b2015460d\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014863 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jmm6\" (UniqueName: \"kubernetes.io/projected/5d121fee-d98a-4dcd-ba07-1d4b2015460d-kube-api-access-2jmm6\") 
pod \"5d121fee-d98a-4dcd-ba07-1d4b2015460d\" (UID: \"5d121fee-d98a-4dcd-ba07-1d4b2015460d\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014933 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df8a1411-7582-4f42-8b5a-3b97cebd9254-operator-scripts\") pod \"df8a1411-7582-4f42-8b5a-3b97cebd9254\" (UID: \"df8a1411-7582-4f42-8b5a-3b97cebd9254\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.014961 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-operator-scripts\") pod \"ff7f1bf7-3734-4c0e-afc2-d841cc97a529\" (UID: \"ff7f1bf7-3734-4c0e-afc2-d841cc97a529\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.015042 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h58v\" (UniqueName: \"kubernetes.io/projected/f8a322a0-752b-4ab1-9418-41c4747eebee-kube-api-access-7h58v\") pod \"f8a322a0-752b-4ab1-9418-41c4747eebee\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.015117 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-default\") pod \"ab69f1a2-78df-4097-a527-0b90345cdcfe\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.015171 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsk7c\" (UniqueName: \"kubernetes.io/projected/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-kube-api-access-rsk7c\") pod \"ff7f1bf7-3734-4c0e-afc2-d841cc97a529\" (UID: \"ff7f1bf7-3734-4c0e-afc2-d841cc97a529\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.015237 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-public-tls-certs\") pod \"f8a322a0-752b-4ab1-9418-41c4747eebee\" (UID: \"f8a322a0-752b-4ab1-9418-41c4747eebee\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.015282 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwr58\" (UniqueName: \"kubernetes.io/projected/df8a1411-7582-4f42-8b5a-3b97cebd9254-kube-api-access-hwr58\") pod \"df8a1411-7582-4f42-8b5a-3b97cebd9254\" (UID: \"df8a1411-7582-4f42-8b5a-3b97cebd9254\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.015314 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ab69f1a2-78df-4097-a527-0b90345cdcfe\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.015340 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-kolla-config\") pod \"ab69f1a2-78df-4097-a527-0b90345cdcfe\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.015371 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-galera-tls-certs\") pod \"ab69f1a2-78df-4097-a527-0b90345cdcfe\" (UID: \"ab69f1a2-78df-4097-a527-0b90345cdcfe\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.015998 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.016018 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.016032 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c17cf84-2174-42d8-880a-9a643a161ef4-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.016043 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwzfl\" (UniqueName: \"kubernetes.io/projected/9c17cf84-2174-42d8-880a-9a643a161ef4-kube-api-access-vwzfl\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.016055 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xzb9\" (UniqueName: \"kubernetes.io/projected/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-kube-api-access-8xzb9\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.016064 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.016343 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df8a1411-7582-4f42-8b5a-3b97cebd9254-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df8a1411-7582-4f42-8b5a-3b97cebd9254" (UID: "df8a1411-7582-4f42-8b5a-3b97cebd9254"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.017215 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff7f1bf7-3734-4c0e-afc2-d841cc97a529" (UID: "ff7f1bf7-3734-4c0e-afc2-d841cc97a529"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.018208 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8a322a0-752b-4ab1-9418-41c4747eebee-logs" (OuterVolumeSpecName: "logs") pod "f8a322a0-752b-4ab1-9418-41c4747eebee" (UID: "f8a322a0-752b-4ab1-9418-41c4747eebee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.019890 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab69f1a2-78df-4097-a527-0b90345cdcfe" (UID: "ab69f1a2-78df-4097-a527-0b90345cdcfe"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.021211 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "ab69f1a2-78df-4097-a527-0b90345cdcfe" (UID: "ab69f1a2-78df-4097-a527-0b90345cdcfe"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.022008 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "ab69f1a2-78df-4097-a527-0b90345cdcfe" (UID: "ab69f1a2-78df-4097-a527-0b90345cdcfe"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.032033 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8a1411-7582-4f42-8b5a-3b97cebd9254-kube-api-access-hwr58" (OuterVolumeSpecName: "kube-api-access-hwr58") pod "df8a1411-7582-4f42-8b5a-3b97cebd9254" (UID: "df8a1411-7582-4f42-8b5a-3b97cebd9254"). InnerVolumeSpecName "kube-api-access-hwr58". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.035363 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ab69f1a2-78df-4097-a527-0b90345cdcfe" (UID: "ab69f1a2-78df-4097-a527-0b90345cdcfe"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.041051 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d121fee-d98a-4dcd-ba07-1d4b2015460d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d121fee-d98a-4dcd-ba07-1d4b2015460d" (UID: "5d121fee-d98a-4dcd-ba07-1d4b2015460d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.043804 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab69f1a2-78df-4097-a527-0b90345cdcfe-kube-api-access-vsbv2" (OuterVolumeSpecName: "kube-api-access-vsbv2") pod "ab69f1a2-78df-4097-a527-0b90345cdcfe" (UID: "ab69f1a2-78df-4097-a527-0b90345cdcfe"). InnerVolumeSpecName "kube-api-access-vsbv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.076378 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a322a0-752b-4ab1-9418-41c4747eebee-kube-api-access-7h58v" (OuterVolumeSpecName: "kube-api-access-7h58v") pod "f8a322a0-752b-4ab1-9418-41c4747eebee" (UID: "f8a322a0-752b-4ab1-9418-41c4747eebee"). InnerVolumeSpecName "kube-api-access-7h58v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.076446 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-kube-api-access-rsk7c" (OuterVolumeSpecName: "kube-api-access-rsk7c") pod "ff7f1bf7-3734-4c0e-afc2-d841cc97a529" (UID: "ff7f1bf7-3734-4c0e-afc2-d841cc97a529"). InnerVolumeSpecName "kube-api-access-rsk7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.078535 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.078813 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.082953 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d121fee-d98a-4dcd-ba07-1d4b2015460d-kube-api-access-2jmm6" (OuterVolumeSpecName: "kube-api-access-2jmm6") pod "5d121fee-d98a-4dcd-ba07-1d4b2015460d" (UID: "5d121fee-d98a-4dcd-ba07-1d4b2015460d"). InnerVolumeSpecName "kube-api-access-2jmm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.090547 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6595dffb96-c4mrx" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.093329 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf4fd62f-751c-4ba7-8582-3d953bdc0bf6" (UID: "bf4fd62f-751c-4ba7-8582-3d953bdc0bf6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.106731 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "ab69f1a2-78df-4097-a527-0b90345cdcfe" (UID: "ab69f1a2-78df-4097-a527-0b90345cdcfe"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117025 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-combined-ca-bundle\") pod \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117066 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8db82600-180c-4114-8006-551e1b566ce5-erlang-cookie-secret\") pod \"8db82600-180c-4114-8006-551e1b566ce5\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117087 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-combined-ca-bundle\") pod \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117114 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-plugins\") pod \"8db82600-180c-4114-8006-551e1b566ce5\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117136 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-galera-tls-certs\") pod \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117157 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117181 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-operator-scripts\") pod \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117202 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-generated\") pod \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117224 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"8db82600-180c-4114-8006-551e1b566ce5\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117247 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fxjn\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-kube-api-access-9fxjn\") pod \"8db82600-180c-4114-8006-551e1b566ce5\" (UID: 
\"8db82600-180c-4114-8006-551e1b566ce5\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117267 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data\") pod \"8db82600-180c-4114-8006-551e1b566ce5\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117286 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-server-conf\") pod \"8db82600-180c-4114-8006-551e1b566ce5\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117319 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kolla-config\") pod \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117336 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-credential-keys\") pod \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117353 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-config-data\") pod \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117375 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-confd\") pod \"8db82600-180c-4114-8006-551e1b566ce5\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117392 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cstck\" (UniqueName: \"kubernetes.io/projected/d42b962f-20f0-43d1-a1c4-c16c9392ec82-kube-api-access-cstck\") pod \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117415 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-plugins-conf\") pod \"8db82600-180c-4114-8006-551e1b566ce5\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117437 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-scripts\") pod \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117455 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-internal-tls-certs\") pod \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") 
" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117478 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-default\") pod \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117687 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8db82600-180c-4114-8006-551e1b566ce5" (UID: "8db82600-180c-4114-8006-551e1b566ce5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117850 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plsww\" (UniqueName: \"kubernetes.io/projected/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kube-api-access-plsww\") pod \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117908 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8db82600-180c-4114-8006-551e1b566ce5-pod-info\") pod \"8db82600-180c-4114-8006-551e1b566ce5\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.117964 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-tls\") pod \"8db82600-180c-4114-8006-551e1b566ce5\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.118013 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-fernet-keys\") pod \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.118069 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-public-tls-certs\") pod \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\" (UID: \"d42b962f-20f0-43d1-a1c4-c16c9392ec82\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.118091 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-erlang-cookie\") pod \"8db82600-180c-4114-8006-551e1b566ce5\" (UID: \"8db82600-180c-4114-8006-551e1b566ce5\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119308 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h58v\" (UniqueName: \"kubernetes.io/projected/f8a322a0-752b-4ab1-9418-41c4747eebee-kube-api-access-7h58v\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119328 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-default\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 
14:07:30.119338 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsk7c\" (UniqueName: \"kubernetes.io/projected/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-kube-api-access-rsk7c\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119348 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwr58\" (UniqueName: \"kubernetes.io/projected/df8a1411-7582-4f42-8b5a-3b97cebd9254-kube-api-access-hwr58\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119368 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119376 4900 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119385 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab69f1a2-78df-4097-a527-0b90345cdcfe-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119393 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8a322a0-752b-4ab1-9418-41c4747eebee-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119401 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsbv2\" (UniqueName: \"kubernetes.io/projected/ab69f1a2-78df-4097-a527-0b90345cdcfe-kube-api-access-vsbv2\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119410 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119418 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ab69f1a2-78df-4097-a527-0b90345cdcfe-config-data-generated\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119426 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d121fee-d98a-4dcd-ba07-1d4b2015460d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119434 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jmm6\" (UniqueName: \"kubernetes.io/projected/5d121fee-d98a-4dcd-ba07-1d4b2015460d-kube-api-access-2jmm6\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119442 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119449 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df8a1411-7582-4f42-8b5a-3b97cebd9254-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.119458 
4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff7f1bf7-3734-4c0e-afc2-d841cc97a529-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.127256 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8db82600-180c-4114-8006-551e1b566ce5" (UID: "8db82600-180c-4114-8006-551e1b566ce5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.133201 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8db82600-180c-4114-8006-551e1b566ce5" (UID: "8db82600-180c-4114-8006-551e1b566ce5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.133701 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" (UID: "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.134043 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" (UID: "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.134164 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-kube-api-access-9fxjn" (OuterVolumeSpecName: "kube-api-access-9fxjn") pod "8db82600-180c-4114-8006-551e1b566ce5" (UID: "8db82600-180c-4114-8006-551e1b566ce5"). InnerVolumeSpecName "kube-api-access-9fxjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.134393 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" (UID: "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.139238 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d42b962f-20f0-43d1-a1c4-c16c9392ec82" (UID: "d42b962f-20f0-43d1-a1c4-c16c9392ec82"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.139708 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d42b962f-20f0-43d1-a1c4-c16c9392ec82" (UID: "d42b962f-20f0-43d1-a1c4-c16c9392ec82"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.139819 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8a322a0-752b-4ab1-9418-41c4747eebee" (UID: "f8a322a0-752b-4ab1-9418-41c4747eebee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.140973 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8db82600-180c-4114-8006-551e1b566ce5" (UID: "8db82600-180c-4114-8006-551e1b566ce5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.141992 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8db82600-180c-4114-8006-551e1b566ce5-pod-info" (OuterVolumeSpecName: "pod-info") pod "8db82600-180c-4114-8006-551e1b566ce5" (UID: "8db82600-180c-4114-8006-551e1b566ce5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.142048 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-scripts" (OuterVolumeSpecName: "scripts") pod "d42b962f-20f0-43d1-a1c4-c16c9392ec82" (UID: "d42b962f-20f0-43d1-a1c4-c16c9392ec82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.143183 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" (UID: "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.143585 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db82600-180c-4114-8006-551e1b566ce5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8db82600-180c-4114-8006-551e1b566ce5" (UID: "8db82600-180c-4114-8006-551e1b566ce5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.144448 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "8db82600-180c-4114-8006-551e1b566ce5" (UID: "8db82600-180c-4114-8006-551e1b566ce5"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.145132 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42b962f-20f0-43d1-a1c4-c16c9392ec82-kube-api-access-cstck" (OuterVolumeSpecName: "kube-api-access-cstck") pod "d42b962f-20f0-43d1-a1c4-c16c9392ec82" (UID: "d42b962f-20f0-43d1-a1c4-c16c9392ec82"). InnerVolumeSpecName "kube-api-access-cstck". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.159978 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kube-api-access-plsww" (OuterVolumeSpecName: "kube-api-access-plsww") pod "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" (UID: "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79"). InnerVolumeSpecName "kube-api-access-plsww". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.183259 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.186327 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-config-data" (OuterVolumeSpecName: "config-data") pod "f8a322a0-752b-4ab1-9418-41c4747eebee" (UID: "f8a322a0-752b-4ab1-9418-41c4747eebee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.189978 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.205766 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "mysql-db") pod "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" (UID: "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.223955 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224012 4900 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8db82600-180c-4114-8006-551e1b566ce5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224063 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224076 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224087 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-generated\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224103 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224112 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fxjn\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-kube-api-access-9fxjn\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224140 4900 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224149 4900 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224161 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cstck\" (UniqueName: \"kubernetes.io/projected/d42b962f-20f0-43d1-a1c4-c16c9392ec82-kube-api-access-cstck\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224188 4900 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224215 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224228 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc 
kubenswrapper[4900]: I1202 14:07:30.224239 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-config-data-default\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224248 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plsww\" (UniqueName: \"kubernetes.io/projected/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-kube-api-access-plsww\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224258 4900 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8db82600-180c-4114-8006-551e1b566ce5-pod-info\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224267 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224294 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224304 4900 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.224314 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.246148 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab69f1a2-78df-4097-a527-0b90345cdcfe" (UID: "ab69f1a2-78df-4097-a527-0b90345cdcfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.271103 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d42b962f-20f0-43d1-a1c4-c16c9392ec82" (UID: "d42b962f-20f0-43d1-a1c4-c16c9392ec82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.276424 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-config-data" (OuterVolumeSpecName: "config-data") pod "d42b962f-20f0-43d1-a1c4-c16c9392ec82" (UID: "d42b962f-20f0-43d1-a1c4-c16c9392ec82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
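From here the stream switches to pod-lifecycle events. Each "SyncLoop (PLEG): event for pod" record that follows carries a payload klog renders as event={"ID":...,"Type":...,"Data":...}, where ID is the pod UID and Data is the container (or sandbox) ID that changed state. Below is a minimal Go sketch of that shape, assuming plain string fields throughout; kubelet's real types live in pkg/kubelet/pleg and are not reproduced here.

// Minimal sketch of the PLEG event shape printed in the "SyncLoop (PLEG)"
// records below; field names follow the log output, not kubelet's source.
package main

import "fmt"

type podLifecycleEvent struct {
	ID   string // pod UID
	Type string // "ContainerDied", "ContainerStarted", ...
	Data string // container (or sandbox) ID the event refers to
}

func handle(ev podLifecycleEvent) {
	switch ev.Type {
	case "ContainerDied":
		// When Data names the pod sandbox itself, the whole sandbox is gone,
		// which is why "No ready sandbox for pod can be found. Need to start
		// a new one" accompanies pods the API still expects to run.
		fmt.Printf("pod %s: container %s died\n", ev.ID, ev.Data)
	default:
		fmt.Printf("pod %s: %s\n", ev.ID, ev.Type)
	}
}

func main() {
	// Values copied from the first PLEG record below (nova-cell0-conductor-0).
	handle(podLifecycleEvent{
		ID:   "bf4fd62f-751c-4ba7-8582-3d953bdc0bf6",
		Type: "ContainerDied",
		Data: "94e930bae7a99946e0d51ef522ed9cfd677947ca6133a0074cf61fd4e4b9a035",
	})
}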
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.278016 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf4fd62f-751c-4ba7-8582-3d953bdc0bf6","Type":"ContainerDied","Data":"94e930bae7a99946e0d51ef522ed9cfd677947ca6133a0074cf61fd4e4b9a035"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.278061 4900 scope.go:117] "RemoveContainer" containerID="7a015b969677f8a38ffbf9b4e7f89474014d3449b484894c6c2a8469cb1a3e61" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.278070 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.285011 4900 generic.go:334] "Generic (PLEG): container finished" podID="cb2b5602-0b26-4de1-ac2c-3606bd0aede3" containerID="e987a3eb684f39a5280336cf0f24e6d52b9537943b4ae3c78e42d02eaacbba03" exitCode=0 Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.285083 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ff8b8d959-29bd8" event={"ID":"cb2b5602-0b26-4de1-ac2c-3606bd0aede3","Type":"ContainerDied","Data":"e987a3eb684f39a5280336cf0f24e6d52b9537943b4ae3c78e42d02eaacbba03"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.288201 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder5ba5-account-delete-kd6ml" event={"ID":"ff7f1bf7-3734-4c0e-afc2-d841cc97a529","Type":"ContainerDied","Data":"aa62552ca88d9f76b70cc375eeb3f6280df5712e210d444d514bc2858fb9468c"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.288252 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa62552ca88d9f76b70cc375eeb3f6280df5712e210d444d514bc2858fb9468c" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.288220 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder5ba5-account-delete-kd6ml" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.290469 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance0080-account-delete-fz4nc" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.290469 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance0080-account-delete-fz4nc" event={"ID":"df8a1411-7582-4f42-8b5a-3b97cebd9254","Type":"ContainerDied","Data":"64e04d45d4f74716cc37ffb54b1ad5df6dfb0a76eca3363575fddff519fa0daf"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.290722 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64e04d45d4f74716cc37ffb54b1ad5df6dfb0a76eca3363575fddff519fa0daf" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.293514 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8a322a0-752b-4ab1-9418-41c4747eebee","Type":"ContainerDied","Data":"f4d9ad1aefdbffb4fb8d7675c24e88f9e4d1ea02aca5329e6567e8d369219207"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.293551 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.295758 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.295847 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ab69f1a2-78df-4097-a527-0b90345cdcfe","Type":"ContainerDied","Data":"d35c27e6bae1249db1889e71323773e48e3ea042690f40f008866b12fe4d35ff"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.297779 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6595dffb96-c4mrx" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.297713 4900 generic.go:334] "Generic (PLEG): container finished" podID="d42b962f-20f0-43d1-a1c4-c16c9392ec82" containerID="e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd" exitCode=0 Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.297786 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6595dffb96-c4mrx" event={"ID":"d42b962f-20f0-43d1-a1c4-c16c9392ec82","Type":"ContainerDied","Data":"e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.297883 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6595dffb96-c4mrx" event={"ID":"d42b962f-20f0-43d1-a1c4-c16c9392ec82","Type":"ContainerDied","Data":"e99d2517fece5e3cd9d9868f45aa11e813c3ef98d6874f4e763b1a8d7eb21800"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.301123 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9c17cf84-2174-42d8-880a-9a643a161ef4/ovn-northd/0.log" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.301310 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.301500 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9c17cf84-2174-42d8-880a-9a643a161ef4","Type":"ContainerDied","Data":"0af10bb990a0a635a2e90374c0fdc9fc55b31873db85dd5fee1fdb78f8a57303"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.305940 4900 generic.go:334] "Generic (PLEG): container finished" podID="784ffd24-69a7-4235-9d4d-4a1be6f183fd" containerID="510d10432ff195659ecc944eebf232f1acb2bf5b53e5bcf0ad3e9a2ab2d1a6fb" exitCode=0 Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.305994 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"784ffd24-69a7-4235-9d4d-4a1be6f183fd","Type":"ContainerDied","Data":"510d10432ff195659ecc944eebf232f1acb2bf5b53e5bcf0ad3e9a2ab2d1a6fb"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.307887 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8db82600-180c-4114-8006-551e1b566ce5","Type":"ContainerDied","Data":"e3f54f155d443d2b46db54a399c112d35a27629a2c6c43acd873b59741850247"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.307953 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.312126 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"812fa799-d734-4151-b87f-25d638295714","Type":"ContainerDied","Data":"8ddeb14ea9d43f8ce1a42ac22c9b7fe2e897d371a21d098cd1ac4669370682eb"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.312217 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.314902 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gn6td_09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d/ovn-controller/0.log" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.314937 4900 generic.go:334] "Generic (PLEG): container finished" podID="09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" containerID="c8588296af791c00c99ca2cc1241929618a4f8fd2a651218322a322c131b0851" exitCode=139 Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.314988 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gn6td" event={"ID":"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d","Type":"ContainerDied","Data":"c8588296af791c00c99ca2cc1241929618a4f8fd2a651218322a322c131b0851"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.316545 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.317310 4900 generic.go:334] "Generic (PLEG): container finished" podID="533da492-8f1f-4593-86bd-8d5b316bb897" containerID="75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f" exitCode=0 Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.317349 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"533da492-8f1f-4593-86bd-8d5b316bb897","Type":"ContainerDied","Data":"75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.317365 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"533da492-8f1f-4593-86bd-8d5b316bb897","Type":"ContainerDied","Data":"46b5616a1841004184c412c62464ad5d24509666d9d0e701673c2c05b0c2f916"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.317419 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.324919 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementa999-account-delete-25gdq" event={"ID":"5d121fee-d98a-4dcd-ba07-1d4b2015460d","Type":"ContainerDied","Data":"b42c2792f703fe7c00954295ae0cb411363acfaab7617c86614e6e1e9c9c4125"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.324961 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42c2792f703fe7c00954295ae0cb411363acfaab7617c86614e6e1e9c9c4125" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.325036 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placementa999-account-delete-25gdq" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.340399 4900 generic.go:334] "Generic (PLEG): container finished" podID="f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" containerID="30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9" exitCode=0 Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.340617 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79","Type":"ContainerDied","Data":"30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.340924 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79","Type":"ContainerDied","Data":"3597d0f3b8bd621961cfd6662c855263bfc068c2011d5e3725f624ab2a68e5bf"} Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.341028 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.360592 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-879t5"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.366357 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-879t5"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.376824 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqcm"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.382884 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4c1f-account-create-update-s546k"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.383551 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" (UID: "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.384175 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-scripts\") pod \"533da492-8f1f-4593-86bd-8d5b316bb897\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.384219 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-combined-ca-bundle\") pod \"533da492-8f1f-4593-86bd-8d5b316bb897\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.385030 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "9c17cf84-2174-42d8-880a-9a643a161ef4" (UID: "9c17cf84-2174-42d8-880a-9a643a161ef4"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.384259 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-config-data\") pod \"533da492-8f1f-4593-86bd-8d5b316bb897\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.385953 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-sg-core-conf-yaml\") pod \"533da492-8f1f-4593-86bd-8d5b316bb897\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.385987 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-northd-tls-certs\") pod \"9c17cf84-2174-42d8-880a-9a643a161ef4\" (UID: \"9c17cf84-2174-42d8-880a-9a643a161ef4\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.386011 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-log-httpd\") pod \"533da492-8f1f-4593-86bd-8d5b316bb897\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.386079 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26tg6\" (UniqueName: \"kubernetes.io/projected/533da492-8f1f-4593-86bd-8d5b316bb897-kube-api-access-26tg6\") pod \"533da492-8f1f-4593-86bd-8d5b316bb897\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.386111 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-ceilometer-tls-certs\") pod \"533da492-8f1f-4593-86bd-8d5b316bb897\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.386134 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-combined-ca-bundle\") pod \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\" (UID: \"f2fc5f74-3f4c-4988-aa1c-c2dd50aade79\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.386193 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-run-httpd\") pod \"533da492-8f1f-4593-86bd-8d5b316bb897\" (UID: \"533da492-8f1f-4593-86bd-8d5b316bb897\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.386728 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.386743 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.386754 4900 reconciler_common.go:293] "Volume detached 
for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.386766 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.387101 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "533da492-8f1f-4593-86bd-8d5b316bb897" (UID: "533da492-8f1f-4593-86bd-8d5b316bb897"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.388774 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican4c1f-account-delete-ksk64"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.390769 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "533da492-8f1f-4593-86bd-8d5b316bb897" (UID: "533da492-8f1f-4593-86bd-8d5b316bb897"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: W1202 14:07:30.392054 4900 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9c17cf84-2174-42d8-880a-9a643a161ef4/volumes/kubernetes.io~secret/ovn-northd-tls-certs Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.392093 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "9c17cf84-2174-42d8-880a-9a643a161ef4" (UID: "9c17cf84-2174-42d8-880a-9a643a161ef4"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: W1202 14:07:30.392188 4900 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79/volumes/kubernetes.io~secret/combined-ca-bundle Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.392198 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" (UID: "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.393056 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican4c1f-account-delete-ksk64"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.398715 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4c1f-account-create-update-s546k"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.399441 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-scripts" (OuterVolumeSpecName: "scripts") pod "533da492-8f1f-4593-86bd-8d5b316bb897" (UID: "533da492-8f1f-4593-86bd-8d5b316bb897"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.406334 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533da492-8f1f-4593-86bd-8d5b316bb897-kube-api-access-26tg6" (OuterVolumeSpecName: "kube-api-access-26tg6") pod "533da492-8f1f-4593-86bd-8d5b316bb897" (UID: "533da492-8f1f-4593-86bd-8d5b316bb897"). InnerVolumeSpecName "kube-api-access-26tg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.431450 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data" (OuterVolumeSpecName: "config-data") pod "8db82600-180c-4114-8006-551e1b566ce5" (UID: "8db82600-180c-4114-8006-551e1b566ce5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.453722 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.488342 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.488531 4900 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.488585 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.488692 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.488750 4900 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.488803 4900 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/533da492-8f1f-4593-86bd-8d5b316bb897-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.488856 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26tg6\" (UniqueName: \"kubernetes.io/projected/533da492-8f1f-4593-86bd-8d5b316bb897-kube-api-access-26tg6\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.488916 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.497167 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod 
"ab69f1a2-78df-4097-a527-0b90345cdcfe" (UID: "ab69f1a2-78df-4097-a527-0b90345cdcfe"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.526093 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-lwsqf"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.542087 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f8a322a0-752b-4ab1-9418-41c4747eebee" (UID: "f8a322a0-752b-4ab1-9418-41c4747eebee"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.556402 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-lwsqf"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.582946 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a999-account-create-update-h878s"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.585538 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f8a322a0-752b-4ab1-9418-41c4747eebee" (UID: "f8a322a0-752b-4ab1-9418-41c4747eebee"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.594106 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d42b962f-20f0-43d1-a1c4-c16c9392ec82" (UID: "d42b962f-20f0-43d1-a1c4-c16c9392ec82"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.601954 4900 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.602194 4900 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab69f1a2-78df-4097-a527-0b90345cdcfe-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.602208 4900 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a322a0-752b-4ab1-9418-41c4747eebee-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.602219 4900 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.605883 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a999-account-create-update-h878s"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.611257 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "533da492-8f1f-4593-86bd-8d5b316bb897" (UID: "533da492-8f1f-4593-86bd-8d5b316bb897"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.615817 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementa999-account-delete-25gdq"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.620506 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" (UID: "f2fc5f74-3f4c-4988-aa1c-c2dd50aade79"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.625329 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "9c17cf84-2174-42d8-880a-9a643a161ef4" (UID: "9c17cf84-2174-42d8-880a-9a643a161ef4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.626699 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placementa999-account-delete-25gdq"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.627385 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "533da492-8f1f-4593-86bd-8d5b316bb897" (UID: "533da492-8f1f-4593-86bd-8d5b316bb897"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.627904 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d42b962f-20f0-43d1-a1c4-c16c9392ec82" (UID: "d42b962f-20f0-43d1-a1c4-c16c9392ec82"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.635255 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-server-conf" (OuterVolumeSpecName: "server-conf") pod "8db82600-180c-4114-8006-551e1b566ce5" (UID: "8db82600-180c-4114-8006-551e1b566ce5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.646763 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "533da492-8f1f-4593-86bd-8d5b316bb897" (UID: "533da492-8f1f-4593-86bd-8d5b316bb897"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.657549 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mg867"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.689122 4900 scope.go:117] "RemoveContainer" containerID="863da9d070c65e09dd0916cf6ca8bf7a03e08fb91cbe309cd740bc4a5f3a11aa" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.693983 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mg867"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.697149 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8db82600-180c-4114-8006-551e1b566ce5" (UID: "8db82600-180c-4114-8006-551e1b566ce5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.703487 4900 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.703513 4900 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8db82600-180c-4114-8006-551e1b566ce5-server-conf\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.703524 4900 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c17cf84-2174-42d8-880a-9a643a161ef4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.703534 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8db82600-180c-4114-8006-551e1b566ce5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.703543 4900 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42b962f-20f0-43d1-a1c4-c16c9392ec82-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.703552 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.703559 4900 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.703567 4900 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.714675 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0080-account-create-update-7t6ck"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.721107 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-config-data" (OuterVolumeSpecName: "config-data") pod "533da492-8f1f-4593-86bd-8d5b316bb897" (UID: "533da492-8f1f-4593-86bd-8d5b316bb897"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.721858 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance0080-account-delete-fz4nc"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.730557 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0080-account-create-update-7t6ck"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.736105 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.738654 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance0080-account-delete-fz4nc"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.746527 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4zn4n"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.752087 4900 scope.go:117] "RemoveContainer" containerID="81cc101728f5de7b2d97ff757885b53c0f26120c2800d3265c49f79eefd4fe58" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.759029 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4zn4n"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.778631 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5ba5-account-create-update-hqqxz"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.787897 4900 scope.go:117] "RemoveContainer" containerID="f47b6bd8993d686e43d5eedb56b1a8fb8563b96b97615d977580ed1e305ad9b9" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.788220 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5ba5-account-create-update-hqqxz"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.790061 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gn6td_09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d/ovn-controller/0.log" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.790121 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gn6td" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.802097 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder5ba5-account-delete-kd6ml"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.804863 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-config-data\") pod \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.804910 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-combined-ca-bundle\") pod \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.805093 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slkvt\" (UniqueName: \"kubernetes.io/projected/784ffd24-69a7-4235-9d4d-4a1be6f183fd-kube-api-access-slkvt\") pod \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.806196 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533da492-8f1f-4593-86bd-8d5b316bb897-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.807954 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder5ba5-account-delete-kd6ml"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.829046 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lskk6"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.838939 4900 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784ffd24-69a7-4235-9d4d-4a1be6f183fd-kube-api-access-slkvt" (OuterVolumeSpecName: "kube-api-access-slkvt") pod "784ffd24-69a7-4235-9d4d-4a1be6f183fd" (UID: "784ffd24-69a7-4235-9d4d-4a1be6f183fd"). InnerVolumeSpecName "kube-api-access-slkvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.840266 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lskk6"] Dec 02 14:07:30 crc kubenswrapper[4900]: E1202 14:07:30.845811 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-config-data podName:784ffd24-69a7-4235-9d4d-4a1be6f183fd nodeName:}" failed. No retries permitted until 2025-12-02 14:07:31.345779672 +0000 UTC m=+1496.761593603 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-config-data") pod "784ffd24-69a7-4235-9d4d-4a1be6f183fd" (UID: "784ffd24-69a7-4235-9d4d-4a1be6f183fd") : error deleting /var/lib/kubelet/pods/784ffd24-69a7-4235-9d4d-4a1be6f183fd/volume-subpaths: remove /var/lib/kubelet/pods/784ffd24-69a7-4235-9d4d-4a1be6f183fd/volume-subpaths: no such file or directory Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.846148 4900 scope.go:117] "RemoveContainer" containerID="811c411eac126ccd19ce31de3b8fe1f7fbde7d7f6f765332d7fb6220e74d4d85" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.848797 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-1777-account-create-update-h5cks"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.849501 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "784ffd24-69a7-4235-9d4d-4a1be6f183fd" (UID: "784ffd24-69a7-4235-9d4d-4a1be6f183fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.849703 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.858693 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-1777-account-create-update-h5cks"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.866906 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron1777-account-delete-mh9jf"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.870849 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron1777-account-delete-mh9jf"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.880398 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.884051 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.887747 4900 scope.go:117] "RemoveContainer" containerID="e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.894098 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.901907 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.907420 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-scripts\") pod \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.907469 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndpk2\" (UniqueName: \"kubernetes.io/projected/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-kube-api-access-ndpk2\") pod \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.907509 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run-ovn\") pod \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.907548 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-ovn-controller-tls-certs\") pod \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.907617 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data-custom\") pod \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.907699 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-combined-ca-bundle\") pod \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.907719 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run\") pod \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.907765 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-logs\") pod \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " Dec 02 14:07:30 crc kubenswrapper[4900]: 
I1202 14:07:30.907813 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-combined-ca-bundle\") pod \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.907840 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl9gj\" (UniqueName: \"kubernetes.io/projected/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-kube-api-access-pl9gj\") pod \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.907876 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data\") pod \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\" (UID: \"cb2b5602-0b26-4de1-ac2c-3606bd0aede3\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.907921 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-log-ovn\") pod \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\" (UID: \"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d\") " Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.908343 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slkvt\" (UniqueName: \"kubernetes.io/projected/784ffd24-69a7-4235-9d4d-4a1be6f183fd-kube-api-access-slkvt\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.908364 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.909122 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" (UID: "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.909122 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run" (OuterVolumeSpecName: "var-run") pod "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" (UID: "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.909815 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" (UID: "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.910279 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-logs" (OuterVolumeSpecName: "logs") pod "cb2b5602-0b26-4de1-ac2c-3606bd0aede3" (UID: "cb2b5602-0b26-4de1-ac2c-3606bd0aede3"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.912734 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-scripts" (OuterVolumeSpecName: "scripts") pod "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" (UID: "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.918337 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cb2b5602-0b26-4de1-ac2c-3606bd0aede3" (UID: "cb2b5602-0b26-4de1-ac2c-3606bd0aede3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.924457 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-kube-api-access-pl9gj" (OuterVolumeSpecName: "kube-api-access-pl9gj") pod "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" (UID: "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d"). InnerVolumeSpecName "kube-api-access-pl9gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.924685 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-kube-api-access-ndpk2" (OuterVolumeSpecName: "kube-api-access-ndpk2") pod "cb2b5602-0b26-4de1-ac2c-3606bd0aede3" (UID: "cb2b5602-0b26-4de1-ac2c-3606bd0aede3"). InnerVolumeSpecName "kube-api-access-ndpk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.947203 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00825975-35eb-46d6-8aeb-753170564467" path="/var/lib/kubelet/pods/00825975-35eb-46d6-8aeb-753170564467/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.947768 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213ca91f-e63f-4f0e-a161-57f4cb101c0f" path="/var/lib/kubelet/pods/213ca91f-e63f-4f0e-a161-57f4cb101c0f/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.948324 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="241c5e6f-d993-4c7a-90a2-1ae1786dbea2" path="/var/lib/kubelet/pods/241c5e6f-d993-4c7a-90a2-1ae1786dbea2/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.949339 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ede90d7-42d3-40d8-aebd-f8be400d967c" path="/var/lib/kubelet/pods/3ede90d7-42d3-40d8-aebd-f8be400d967c/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.949995 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="509f62ef-848d-46b5-8272-1e94429353cb" path="/var/lib/kubelet/pods/509f62ef-848d-46b5-8272-1e94429353cb/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.950546 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a5b4c0-6620-4f5a-a17a-bc792435afac" path="/var/lib/kubelet/pods/58a5b4c0-6620-4f5a-a17a-bc792435afac/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.951452 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d121fee-d98a-4dcd-ba07-1d4b2015460d" path="/var/lib/kubelet/pods/5d121fee-d98a-4dcd-ba07-1d4b2015460d/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.952038 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69fae44f-0fc7-41e6-9e73-316ac2e88e40" path="/var/lib/kubelet/pods/69fae44f-0fc7-41e6-9e73-316ac2e88e40/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.952765 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809485b0-485a-437a-93f3-432499b8e2c5" path="/var/lib/kubelet/pods/809485b0-485a-437a-93f3-432499b8e2c5/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.953393 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c322124-103a-40d2-a429-f018076f88ff" path="/var/lib/kubelet/pods/8c322124-103a-40d2-a429-f018076f88ff/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.954352 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e0fc853-4474-46e7-8669-5c132f629baf" path="/var/lib/kubelet/pods/8e0fc853-4474-46e7-8669-5c132f629baf/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.954897 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9989e23c-35ec-4efa-9660-0ad9574db896" path="/var/lib/kubelet/pods/9989e23c-35ec-4efa-9660-0ad9574db896/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.955512 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb303d7-3e16-4b92-b0b0-d0ce4b6ca729" path="/var/lib/kubelet/pods/afb303d7-3e16-4b92-b0b0-d0ce4b6ca729/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.956463 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c9e2e8-fe2e-45b4-b7ad-5c574139db29" 
path="/var/lib/kubelet/pods/b0c9e2e8-fe2e-45b4-b7ad-5c574139db29/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.956853 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb2b5602-0b26-4de1-ac2c-3606bd0aede3" (UID: "cb2b5602-0b26-4de1-ac2c-3606bd0aede3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.956985 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27a1401-3ad1-40a3-9ce6-08cac86fef42" path="/var/lib/kubelet/pods/b27a1401-3ad1-40a3-9ce6-08cac86fef42/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.957415 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df8a1411-7582-4f42-8b5a-3b97cebd9254" path="/var/lib/kubelet/pods/df8a1411-7582-4f42-8b5a-3b97cebd9254/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.957938 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c" path="/var/lib/kubelet/pods/e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.958817 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8121d7-7b10-44c5-9b43-9088b198f34c" path="/var/lib/kubelet/pods/ed8121d7-7b10-44c5-9b43-9088b198f34c/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.959305 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8a322a0-752b-4ab1-9418-41c4747eebee" path="/var/lib/kubelet/pods/f8a322a0-752b-4ab1-9418-41c4747eebee/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.959816 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff7f1bf7-3734-4c0e-afc2-d841cc97a529" path="/var/lib/kubelet/pods/ff7f1bf7-3734-4c0e-afc2-d841cc97a529/volumes" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.977097 4900 scope.go:117] "RemoveContainer" containerID="e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.977292 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data" (OuterVolumeSpecName: "config-data") pod "cb2b5602-0b26-4de1-ac2c-3606bd0aede3" (UID: "cb2b5602-0b26-4de1-ac2c-3606bd0aede3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: E1202 14:07:30.977902 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd\": container with ID starting with e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd not found: ID does not exist" containerID="e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.977927 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd"} err="failed to get container status \"e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd\": rpc error: code = NotFound desc = could not find container \"e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd\": container with ID starting with e6e373f4c6cb684b066e8cc5bccbedac182bdd63ca40fadf45b5eebb0bd20bcd not found: ID does not exist" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.977948 4900 scope.go:117] "RemoveContainer" containerID="21b9a43c02558258bc5549999999ff72f00f4644cbc3a254387cc0fa7154a8d5" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.987194 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" (UID: "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.991881 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.991913 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.991926 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.991934 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.991944 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.991953 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 02 14:07:30 crc kubenswrapper[4900]: I1202 14:07:30.991962 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.016045 4900 scope.go:117] "RemoveContainer" containerID="9b7c3327a1cb841f7805b58f727c06e1d6143291f5866de8942d0948d6568573" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.020469 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-plugins\") pod \"e410de46-b373-431a-8486-21a6f1268e41\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.020615 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-plugins-conf\") pod \"e410de46-b373-431a-8486-21a6f1268e41\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.020681 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg6dg\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-kube-api-access-lg6dg\") pod \"e410de46-b373-431a-8486-21a6f1268e41\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.020740 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-server-conf\") pod \"e410de46-b373-431a-8486-21a6f1268e41\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.020822 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-erlang-cookie\") pod \"e410de46-b373-431a-8486-21a6f1268e41\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.020941 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-tls\") pod \"e410de46-b373-431a-8486-21a6f1268e41\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.020991 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e410de46-b373-431a-8486-21a6f1268e41-pod-info\") pod \"e410de46-b373-431a-8486-21a6f1268e41\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021018 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-confd\") pod \"e410de46-b373-431a-8486-21a6f1268e41\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021109 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e410de46-b373-431a-8486-21a6f1268e41-erlang-cookie-secret\") pod \"e410de46-b373-431a-8486-21a6f1268e41\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021136 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data\") pod \"e410de46-b373-431a-8486-21a6f1268e41\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021178 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"e410de46-b373-431a-8486-21a6f1268e41\" (UID: \"e410de46-b373-431a-8486-21a6f1268e41\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021703 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021816 4900 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021827 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021838 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021851 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl9gj\" (UniqueName: \"kubernetes.io/projected/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-kube-api-access-pl9gj\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021900 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e410de46-b373-431a-8486-21a6f1268e41" (UID: "e410de46-b373-431a-8486-21a6f1268e41"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021862 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021974 4900 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021986 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.021995 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndpk2\" (UniqueName: \"kubernetes.io/projected/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-kube-api-access-ndpk2\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.022003 4900 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.022012 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb2b5602-0b26-4de1-ac2c-3606bd0aede3-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.022679 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e410de46-b373-431a-8486-21a6f1268e41" (UID: 
"e410de46-b373-431a-8486-21a6f1268e41"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.024576 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e410de46-b373-431a-8486-21a6f1268e41" (UID: "e410de46-b373-431a-8486-21a6f1268e41"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.028779 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e410de46-b373-431a-8486-21a6f1268e41-pod-info" (OuterVolumeSpecName: "pod-info") pod "e410de46-b373-431a-8486-21a6f1268e41" (UID: "e410de46-b373-431a-8486-21a6f1268e41"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.030997 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e410de46-b373-431a-8486-21a6f1268e41" (UID: "e410de46-b373-431a-8486-21a6f1268e41"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.033205 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-kube-api-access-lg6dg" (OuterVolumeSpecName: "kube-api-access-lg6dg") pod "e410de46-b373-431a-8486-21a6f1268e41" (UID: "e410de46-b373-431a-8486-21a6f1268e41"). InnerVolumeSpecName "kube-api-access-lg6dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.034224 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "e410de46-b373-431a-8486-21a6f1268e41" (UID: "e410de46-b373-431a-8486-21a6f1268e41"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.035369 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e410de46-b373-431a-8486-21a6f1268e41-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e410de46-b373-431a-8486-21a6f1268e41" (UID: "e410de46-b373-431a-8486-21a6f1268e41"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.047389 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.070139 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data" (OuterVolumeSpecName: "config-data") pod "e410de46-b373-431a-8486-21a6f1268e41" (UID: "e410de46-b373-431a-8486-21a6f1268e41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.089224 4900 scope.go:117] "RemoveContainer" containerID="c9b48d55f32d54ed9f77fab0b281d7e2bb2a1783f7388f1bec82ef0b685bf983" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.124118 4900 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e410de46-b373-431a-8486-21a6f1268e41-pod-info\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.124147 4900 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e410de46-b373-431a-8486-21a6f1268e41-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.124160 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.124194 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.124207 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.124216 4900 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.124225 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg6dg\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-kube-api-access-lg6dg\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.124233 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.124243 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.126826 4900 scope.go:117] "RemoveContainer" containerID="1edf53c496618c33923bb60078803c42df40f981136a82e37805dfe6b475de7b" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.134268 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.170562 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.177431 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-server-conf" (OuterVolumeSpecName: "server-conf") pod "e410de46-b373-431a-8486-21a6f1268e41" (UID: "e410de46-b373-431a-8486-21a6f1268e41"). 
InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.187796 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6595dffb96-c4mrx"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.200844 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e410de46-b373-431a-8486-21a6f1268e41" (UID: "e410de46-b373-431a-8486-21a6f1268e41"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.185099 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" (UID: "09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.208610 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6595dffb96-c4mrx"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.229465 4900 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e410de46-b373-431a-8486-21a6f1268e41-server-conf\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.229506 4900 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.229517 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e410de46-b373-431a-8486-21a6f1268e41-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.229525 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.234701 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.237326 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.258692 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.268964 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.309550 4900 scope.go:117] "RemoveContainer" containerID="32a45c77a6ef2f6050d3ead873d9d3fa8d6013c262c2306ca091361bfe251ace" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.336973 4900 scope.go:117] "RemoveContainer" containerID="5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.350069 4900 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-gn6td_09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d/ovn-controller/0.log" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.350152 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gn6td" event={"ID":"09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d","Type":"ContainerDied","Data":"ad5f539e71ab05eda14aa7b310a218ef88791d9bc44443f800451dc55c9259d7"} Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.350248 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gn6td" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.353817 4900 generic.go:334] "Generic (PLEG): container finished" podID="4d1c4eca-52c1-4143-832e-b377e4415feb" containerID="38fe3996b8fc9746a9b766644272a3c9c0a2340267110541a84e74475fb110b6" exitCode=0 Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.353880 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqcm" event={"ID":"4d1c4eca-52c1-4143-832e-b377e4415feb","Type":"ContainerDied","Data":"38fe3996b8fc9746a9b766644272a3c9c0a2340267110541a84e74475fb110b6"} Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.353903 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqcm" event={"ID":"4d1c4eca-52c1-4143-832e-b377e4415feb","Type":"ContainerStarted","Data":"85fb84cf0eb6c56ce31072e1cf5434eb9709a5614c1fb107a2513b25779083b7"} Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.360358 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.360371 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"784ffd24-69a7-4235-9d4d-4a1be6f183fd","Type":"ContainerDied","Data":"093fa19a335b0f733595da6c62f5afca9b7b49c8443e9283e1ae31c2c4299a47"} Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.362730 4900 scope.go:117] "RemoveContainer" containerID="2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.363719 4900 generic.go:334] "Generic (PLEG): container finished" podID="e410de46-b373-431a-8486-21a6f1268e41" containerID="983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403" exitCode=0 Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.363779 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e410de46-b373-431a-8486-21a6f1268e41","Type":"ContainerDied","Data":"983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403"} Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.363849 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e410de46-b373-431a-8486-21a6f1268e41","Type":"ContainerDied","Data":"7c84e19d11ab25122a8e6f3036114e2f5d4f980a08ce5085085819da5d92664f"} Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.363796 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.372705 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ff8b8d959-29bd8" event={"ID":"cb2b5602-0b26-4de1-ac2c-3606bd0aede3","Type":"ContainerDied","Data":"fd2ffbe1c70f7a3ce44f2e575210fbcde3ab26cf8376a0f31267627b2e95894c"} Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.373045 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-ff8b8d959-29bd8" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.395898 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gn6td"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.397749 4900 scope.go:117] "RemoveContainer" containerID="75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.405710 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gn6td"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.416695 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.421997 4900 scope.go:117] "RemoveContainer" containerID="8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.427121 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.433038 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-config-data\") pod \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\" (UID: \"784ffd24-69a7-4235-9d4d-4a1be6f183fd\") " Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.433955 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-ff8b8d959-29bd8"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.437270 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-config-data" (OuterVolumeSpecName: "config-data") pod "784ffd24-69a7-4235-9d4d-4a1be6f183fd" (UID: "784ffd24-69a7-4235-9d4d-4a1be6f183fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.439245 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-ff8b8d959-29bd8"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.444434 4900 scope.go:117] "RemoveContainer" containerID="5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8" Dec 02 14:07:31 crc kubenswrapper[4900]: E1202 14:07:31.444745 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8\": container with ID starting with 5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8 not found: ID does not exist" containerID="5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.444772 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8"} err="failed to get container status \"5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8\": rpc error: code = NotFound desc = could not find container \"5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8\": container with ID starting with 5ad7eaffd420e80df7393f50b3ed7f2f7ec7727c819677d81b8bf8c9fbf9b5e8 not found: ID does not exist" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.444794 4900 scope.go:117] "RemoveContainer" containerID="2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4" Dec 02 14:07:31 crc kubenswrapper[4900]: E1202 14:07:31.445131 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4\": container with ID starting with 2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4 not found: ID does not exist" containerID="2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.445152 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4"} err="failed to get container status \"2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4\": rpc error: code = NotFound desc = could not find container \"2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4\": container with ID starting with 2f40f3f3c4802f2906ccf1015b28c9cfdbea54f8784fbb1815b58da028ffa2a4 not found: ID does not exist" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.445165 4900 scope.go:117] "RemoveContainer" containerID="75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f" Dec 02 14:07:31 crc kubenswrapper[4900]: E1202 14:07:31.445588 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f\": container with ID starting with 75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f not found: ID does not exist" containerID="75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.445608 4900 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f"} err="failed to get container status \"75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f\": rpc error: code = NotFound desc = could not find container \"75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f\": container with ID starting with 75a8db0a81abd17f90bc7f6361eb752757d831f2e1fe944662ceb6076042d99f not found: ID does not exist" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.445620 4900 scope.go:117] "RemoveContainer" containerID="8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a" Dec 02 14:07:31 crc kubenswrapper[4900]: E1202 14:07:31.445852 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a\": container with ID starting with 8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a not found: ID does not exist" containerID="8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.445870 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a"} err="failed to get container status \"8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a\": rpc error: code = NotFound desc = could not find container \"8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a\": container with ID starting with 8313c8455087fc55f00b7be54af858574df564294555cbfa85b5eb7796341d3a not found: ID does not exist" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.445884 4900 scope.go:117] "RemoveContainer" containerID="30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.474361 4900 scope.go:117] "RemoveContainer" containerID="8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.505020 4900 scope.go:117] "RemoveContainer" containerID="30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9" Dec 02 14:07:31 crc kubenswrapper[4900]: E1202 14:07:31.505383 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9\": container with ID starting with 30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9 not found: ID does not exist" containerID="30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.505405 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9"} err="failed to get container status \"30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9\": rpc error: code = NotFound desc = could not find container \"30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9\": container with ID starting with 30fbca74c00a69be888646335ed76831ed716f836b10bafdebca973ff80847c9 not found: ID does not exist" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.505424 4900 scope.go:117] "RemoveContainer" containerID="8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed" Dec 02 14:07:31 crc kubenswrapper[4900]: E1202 14:07:31.505937 4900 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed\": container with ID starting with 8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed not found: ID does not exist" containerID="8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.505956 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed"} err="failed to get container status \"8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed\": rpc error: code = NotFound desc = could not find container \"8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed\": container with ID starting with 8e6f5f22015ce88503570a644c4509f62dac37e245312df6a0b4f97ece5a07ed not found: ID does not exist" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.505969 4900 scope.go:117] "RemoveContainer" containerID="c8588296af791c00c99ca2cc1241929618a4f8fd2a651218322a322c131b0851" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.524872 4900 scope.go:117] "RemoveContainer" containerID="510d10432ff195659ecc944eebf232f1acb2bf5b53e5bcf0ad3e9a2ab2d1a6fb" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.535021 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/784ffd24-69a7-4235-9d4d-4a1be6f183fd-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.549025 4900 scope.go:117] "RemoveContainer" containerID="983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.572191 4900 scope.go:117] "RemoveContainer" containerID="5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.608244 4900 scope.go:117] "RemoveContainer" containerID="983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403" Dec 02 14:07:31 crc kubenswrapper[4900]: E1202 14:07:31.608889 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403\": container with ID starting with 983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403 not found: ID does not exist" containerID="983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.608919 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403"} err="failed to get container status \"983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403\": rpc error: code = NotFound desc = could not find container \"983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403\": container with ID starting with 983c103737171c04c79eaea5edb2909db9feacece6eeb2b54a61b4ed472a1403 not found: ID does not exist" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.608942 4900 scope.go:117] "RemoveContainer" containerID="5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168" Dec 02 14:07:31 crc kubenswrapper[4900]: E1202 14:07:31.609171 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168\": container with ID starting with 5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168 not found: ID does not exist" containerID="5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.609194 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168"} err="failed to get container status \"5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168\": rpc error: code = NotFound desc = could not find container \"5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168\": container with ID starting with 5d3212a30fa4f7c614d4f98ab2cb828adaca94e2e1d858d2818da0a2f2eb6168 not found: ID does not exist" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.609208 4900 scope.go:117] "RemoveContainer" containerID="e987a3eb684f39a5280336cf0f24e6d52b9537943b4ae3c78e42d02eaacbba03" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.636859 4900 scope.go:117] "RemoveContainer" containerID="58ddf684d77381c4c34646d9e1659713b257f24bcc73342818f13e7ed7d7268f" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.695859 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.700250 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.876328 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5624f474-dd54-4580-b816-f238cc733b5a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 14:07:31 crc kubenswrapper[4900]: I1202 14:07:31.876421 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5624f474-dd54-4580-b816-f238cc733b5a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 14:07:32 crc kubenswrapper[4900]: E1202 14:07:32.688622 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:32 crc kubenswrapper[4900]: E1202 14:07:32.689759 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:32 crc kubenswrapper[4900]: E1202 14:07:32.689796 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: 
container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:32 crc kubenswrapper[4900]: E1202 14:07:32.692452 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:32 crc kubenswrapper[4900]: E1202 14:07:32.692500 4900 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server" Dec 02 14:07:32 crc kubenswrapper[4900]: E1202 14:07:32.693430 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:32 crc kubenswrapper[4900]: E1202 14:07:32.697214 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:32 crc kubenswrapper[4900]: E1202 14:07:32.697305 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovs-vswitchd" Dec 02 14:07:32 crc kubenswrapper[4900]: E1202 14:07:32.815978 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb85e71bb_2c08_4821_adf8_5ab6786c5c9b.slice/crio-conmon-bedbc200d19f936535d94fd6432a16a677ed86493cdb3389497b79552338f7db.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d1c4eca_52c1_4143_832e_b377e4415feb.slice/crio-0d4962ac1c137968a6d0fae98020211c840caaced554851bda72133c7c4f59b4.scope\": RecentStats: unable to find data in memory cache]" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.921178 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" path="/var/lib/kubelet/pods/09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d/volumes" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.923460 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" path="/var/lib/kubelet/pods/533da492-8f1f-4593-86bd-8d5b316bb897/volumes" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.925049 4900 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="784ffd24-69a7-4235-9d4d-4a1be6f183fd" path="/var/lib/kubelet/pods/784ffd24-69a7-4235-9d4d-4a1be6f183fd/volumes" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.927103 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812fa799-d734-4151-b87f-25d638295714" path="/var/lib/kubelet/pods/812fa799-d734-4151-b87f-25d638295714/volumes" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.928570 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8db82600-180c-4114-8006-551e1b566ce5" path="/var/lib/kubelet/pods/8db82600-180c-4114-8006-551e1b566ce5/volumes" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.930894 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c17cf84-2174-42d8-880a-9a643a161ef4" path="/var/lib/kubelet/pods/9c17cf84-2174-42d8-880a-9a643a161ef4/volumes" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.932436 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab69f1a2-78df-4097-a527-0b90345cdcfe" path="/var/lib/kubelet/pods/ab69f1a2-78df-4097-a527-0b90345cdcfe/volumes" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.933809 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf4fd62f-751c-4ba7-8582-3d953bdc0bf6" path="/var/lib/kubelet/pods/bf4fd62f-751c-4ba7-8582-3d953bdc0bf6/volumes" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.936041 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb2b5602-0b26-4de1-ac2c-3606bd0aede3" path="/var/lib/kubelet/pods/cb2b5602-0b26-4de1-ac2c-3606bd0aede3/volumes" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.937278 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42b962f-20f0-43d1-a1c4-c16c9392ec82" path="/var/lib/kubelet/pods/d42b962f-20f0-43d1-a1c4-c16c9392ec82/volumes" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.938747 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e410de46-b373-431a-8486-21a6f1268e41" path="/var/lib/kubelet/pods/e410de46-b373-431a-8486-21a6f1268e41/volumes" Dec 02 14:07:32 crc kubenswrapper[4900]: I1202 14:07:32.941274 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" path="/var/lib/kubelet/pods/f2fc5f74-3f4c-4988-aa1c-c2dd50aade79/volumes" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.668837 4900 generic.go:334] "Generic (PLEG): container finished" podID="4d1c4eca-52c1-4143-832e-b377e4415feb" containerID="0d4962ac1c137968a6d0fae98020211c840caaced554851bda72133c7c4f59b4" exitCode=0 Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.668899 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqcm" event={"ID":"4d1c4eca-52c1-4143-832e-b377e4415feb","Type":"ContainerDied","Data":"0d4962ac1c137968a6d0fae98020211c840caaced554851bda72133c7c4f59b4"} Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.762401 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fwqp2"] Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.763400 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5624f474-dd54-4580-b816-f238cc733b5a" containerName="nova-metadata-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.763431 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="5624f474-dd54-4580-b816-f238cc733b5a" containerName="nova-metadata-log" Dec 02 14:07:33 crc 
kubenswrapper[4900]: E1202 14:07:33.763455 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c17cf84-2174-42d8-880a-9a643a161ef4" containerName="ovn-northd" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.763467 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c17cf84-2174-42d8-880a-9a643a161ef4" containerName="ovn-northd" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.763515 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a302619-4a69-4e62-b7cb-6812b771f6d4" containerName="glance-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.763536 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a302619-4a69-4e62-b7cb-6812b771f6d4" containerName="glance-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.765464 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7f1bf7-3734-4c0e-afc2-d841cc97a529" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.765488 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7f1bf7-3734-4c0e-afc2-d841cc97a529" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.765595 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" containerName="mysql-bootstrap" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.765610 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" containerName="mysql-bootstrap" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.765638 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db82600-180c-4114-8006-551e1b566ce5" containerName="setup-container" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.765672 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db82600-180c-4114-8006-551e1b566ce5" containerName="setup-container" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.765712 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" containerName="ovn-controller" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.765726 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" containerName="ovn-controller" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.765782 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4fd62f-751c-4ba7-8582-3d953bdc0bf6" containerName="nova-cell0-conductor-conductor" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.765799 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4fd62f-751c-4ba7-8582-3d953bdc0bf6" containerName="nova-cell0-conductor-conductor" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.765843 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5624f474-dd54-4580-b816-f238cc733b5a" containerName="nova-metadata-metadata" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.765856 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="5624f474-dd54-4580-b816-f238cc733b5a" containerName="nova-metadata-metadata" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.765869 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d121fee-d98a-4dcd-ba07-1d4b2015460d" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.765891 4900 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5d121fee-d98a-4dcd-ba07-1d4b2015460d" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.765933 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="proxy-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.765945 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="proxy-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.765957 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="sg-core" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.765969 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="sg-core" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.765984 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ab71b1-8ff6-488c-9401-9b63341b08dd" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.765995 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ab71b1-8ff6-488c-9401-9b63341b08dd" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.766220 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241c5e6f-d993-4c7a-90a2-1ae1786dbea2" containerName="barbican-keystone-listener" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.766238 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="241c5e6f-d993-4c7a-90a2-1ae1786dbea2" containerName="barbican-keystone-listener" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.766281 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerName="barbican-api-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.766294 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerName="barbican-api-log" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.766309 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e062e50-5a22-45c0-adab-9f78980eb851" containerName="glance-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.766354 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e062e50-5a22-45c0-adab-9f78980eb851" containerName="glance-log" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.766625 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00825975-35eb-46d6-8aeb-753170564467" containerName="proxy-server" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.766680 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="00825975-35eb-46d6-8aeb-753170564467" containerName="proxy-server" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.766697 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab69f1a2-78df-4097-a527-0b90345cdcfe" containerName="mysql-bootstrap" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.766709 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab69f1a2-78df-4097-a527-0b90345cdcfe" containerName="mysql-bootstrap" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.766738 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e062e50-5a22-45c0-adab-9f78980eb851" containerName="glance-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.766752 4900 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9e062e50-5a22-45c0-adab-9f78980eb851" containerName="glance-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.766793 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerName="nova-api-api" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.766806 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerName="nova-api-api" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.767003 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557a84eb-0882-44c1-b4db-7c8a19e1303d" containerName="cinder-api" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.767066 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="557a84eb-0882-44c1-b4db-7c8a19e1303d" containerName="cinder-api" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.767083 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812fa799-d734-4151-b87f-25d638295714" containerName="memcached" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.767095 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="812fa799-d734-4151-b87f-25d638295714" containerName="memcached" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.767127 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8121d7-7b10-44c5-9b43-9088b198f34c" containerName="nova-scheduler-scheduler" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.767142 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8121d7-7b10-44c5-9b43-9088b198f34c" containerName="nova-scheduler-scheduler" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.767171 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00825975-35eb-46d6-8aeb-753170564467" containerName="proxy-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.767183 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="00825975-35eb-46d6-8aeb-753170564467" containerName="proxy-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.767439 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241c5e6f-d993-4c7a-90a2-1ae1786dbea2" containerName="barbican-keystone-listener-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.767475 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="241c5e6f-d993-4c7a-90a2-1ae1786dbea2" containerName="barbican-keystone-listener-log" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.767974 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e410de46-b373-431a-8486-21a6f1268e41" containerName="setup-container" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.768000 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e410de46-b373-431a-8486-21a6f1268e41" containerName="setup-container" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.768084 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db82600-180c-4114-8006-551e1b566ce5" containerName="rabbitmq" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.768099 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db82600-180c-4114-8006-551e1b566ce5" containerName="rabbitmq" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.768118 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c17cf84-2174-42d8-880a-9a643a161ef4" containerName="openstack-network-exporter" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 
14:07:33.768131 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c17cf84-2174-42d8-880a-9a643a161ef4" containerName="openstack-network-exporter" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.768158 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.768583 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.768806 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="ceilometer-notification-agent" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.768833 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="ceilometer-notification-agent" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.769054 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad78a256-27f0-46a9-addb-dbc7b41bebd2" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.769077 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad78a256-27f0-46a9-addb-dbc7b41bebd2" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.770206 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0e50c7-752e-4879-a382-ff97500cfd89" containerName="placement-api" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.770234 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0e50c7-752e-4879-a382-ff97500cfd89" containerName="placement-api" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.770261 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557a84eb-0882-44c1-b4db-7c8a19e1303d" containerName="cinder-api-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.770276 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="557a84eb-0882-44c1-b4db-7c8a19e1303d" containerName="cinder-api-log" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.770465 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="ceilometer-central-agent" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.770484 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="ceilometer-central-agent" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.770763 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2b5602-0b26-4de1-ac2c-3606bd0aede3" containerName="barbican-worker-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771044 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2b5602-0b26-4de1-ac2c-3606bd0aede3" containerName="barbican-worker-log" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771087 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42b962f-20f0-43d1-a1c4-c16c9392ec82" containerName="keystone-api" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771101 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42b962f-20f0-43d1-a1c4-c16c9392ec82" containerName="keystone-api" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771124 4900 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerName="nova-api-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771137 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerName="nova-api-log" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771153 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2b5602-0b26-4de1-ac2c-3606bd0aede3" containerName="barbican-worker" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771172 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2b5602-0b26-4de1-ac2c-3606bd0aede3" containerName="barbican-worker" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771189 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab69f1a2-78df-4097-a527-0b90345cdcfe" containerName="galera" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771202 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab69f1a2-78df-4097-a527-0b90345cdcfe" containerName="galera" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771225 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e183cf-c0fe-4f94-9c03-7f8fa792c4af" containerName="kube-state-metrics" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771237 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e183cf-c0fe-4f94-9c03-7f8fa792c4af" containerName="kube-state-metrics" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771268 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c322124-103a-40d2-a429-f018076f88ff" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771279 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c322124-103a-40d2-a429-f018076f88ff" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771304 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerName="barbican-api" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771317 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerName="barbican-api" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771346 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0e50c7-752e-4879-a382-ff97500cfd89" containerName="placement-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771359 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0e50c7-752e-4879-a382-ff97500cfd89" containerName="placement-log" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771395 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8a1411-7582-4f42-8b5a-3b97cebd9254" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771410 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8a1411-7582-4f42-8b5a-3b97cebd9254" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771453 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a302619-4a69-4e62-b7cb-6812b771f6d4" containerName="glance-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771467 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a302619-4a69-4e62-b7cb-6812b771f6d4" containerName="glance-log" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771488 4900 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e410de46-b373-431a-8486-21a6f1268e41" containerName="rabbitmq" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771502 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e410de46-b373-431a-8486-21a6f1268e41" containerName="rabbitmq" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771541 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" containerName="galera" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771553 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" containerName="galera" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771595 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784ffd24-69a7-4235-9d4d-4a1be6f183fd" containerName="nova-cell1-conductor-conductor" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771607 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="784ffd24-69a7-4235-9d4d-4a1be6f183fd" containerName="nova-cell1-conductor-conductor" Dec 02 14:07:33 crc kubenswrapper[4900]: E1202 14:07:33.771635 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27a1401-3ad1-40a3-9ce6-08cac86fef42" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.771670 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27a1401-3ad1-40a3-9ce6-08cac86fef42" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772584 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="proxy-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772625 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2fc5f74-3f4c-4988-aa1c-c2dd50aade79" containerName="galera" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772671 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerName="barbican-api" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772703 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerName="nova-api-api" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772713 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e183cf-c0fe-4f94-9c03-7f8fa792c4af" containerName="kube-state-metrics" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772749 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="241c5e6f-d993-4c7a-90a2-1ae1786dbea2" containerName="barbican-keystone-listener-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772772 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8a322a0-752b-4ab1-9418-41c4747eebee" containerName="nova-api-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772787 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c17cf84-2174-42d8-880a-9a643a161ef4" containerName="openstack-network-exporter" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772806 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c17cf84-2174-42d8-880a-9a643a161ef4" containerName="ovn-northd" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772825 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c322124-103a-40d2-a429-f018076f88ff" containerName="mariadb-account-delete" Dec 02 14:07:33 crc 
kubenswrapper[4900]: I1202 14:07:33.772836 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="557a84eb-0882-44c1-b4db-7c8a19e1303d" containerName="cinder-api" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772853 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7f1bf7-3734-4c0e-afc2-d841cc97a529" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772903 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e062e50-5a22-45c0-adab-9f78980eb851" containerName="glance-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772934 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="5624f474-dd54-4580-b816-f238cc733b5a" containerName="nova-metadata-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772963 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2aa6b69-49ff-46bb-b0f3-b1d9eca2823c" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.772983 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="812fa799-d734-4151-b87f-25d638295714" containerName="memcached" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773007 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="ceilometer-notification-agent" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773019 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="sg-core" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773038 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="00825975-35eb-46d6-8aeb-753170564467" containerName="proxy-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773066 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571" containerName="barbican-api-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773079 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42b962f-20f0-43d1-a1c4-c16c9392ec82" containerName="keystone-api" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773096 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad78a256-27f0-46a9-addb-dbc7b41bebd2" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773272 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="557a84eb-0882-44c1-b4db-7c8a19e1303d" containerName="cinder-api-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773301 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a302619-4a69-4e62-b7cb-6812b771f6d4" containerName="glance-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773330 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2b5602-0b26-4de1-ac2c-3606bd0aede3" containerName="barbican-worker" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773349 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b9318b-cd6c-4dfc-90d4-44d5a0b86d8d" containerName="ovn-controller" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773364 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="5624f474-dd54-4580-b816-f238cc733b5a" containerName="nova-metadata-metadata" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773378 4900 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="8db82600-180c-4114-8006-551e1b566ce5" containerName="rabbitmq" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773396 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0e50c7-752e-4879-a382-ff97500cfd89" containerName="placement-api" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773443 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab69f1a2-78df-4097-a527-0b90345cdcfe" containerName="galera" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773455 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a302619-4a69-4e62-b7cb-6812b771f6d4" containerName="glance-httpd" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773481 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8121d7-7b10-44c5-9b43-9088b198f34c" containerName="nova-scheduler-scheduler" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773503 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e410de46-b373-431a-8486-21a6f1268e41" containerName="rabbitmq" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773531 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0e50c7-752e-4879-a382-ff97500cfd89" containerName="placement-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773548 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d121fee-d98a-4dcd-ba07-1d4b2015460d" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773560 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="784ffd24-69a7-4235-9d4d-4a1be6f183fd" containerName="nova-cell1-conductor-conductor" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773581 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="00825975-35eb-46d6-8aeb-753170564467" containerName="proxy-server" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773596 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27a1401-3ad1-40a3-9ce6-08cac86fef42" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773622 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ab71b1-8ff6-488c-9401-9b63341b08dd" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773659 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e062e50-5a22-45c0-adab-9f78980eb851" containerName="glance-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773687 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2b5602-0b26-4de1-ac2c-3606bd0aede3" containerName="barbican-worker-log" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773714 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="241c5e6f-d993-4c7a-90a2-1ae1786dbea2" containerName="barbican-keystone-listener" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773738 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8a1411-7582-4f42-8b5a-3b97cebd9254" containerName="mariadb-account-delete" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773765 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4fd62f-751c-4ba7-8582-3d953bdc0bf6" containerName="nova-cell0-conductor-conductor" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.773776 4900 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="533da492-8f1f-4593-86bd-8d5b316bb897" containerName="ceilometer-central-agent" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.780270 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.785672 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwqp2"] Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.935830 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-utilities\") pod \"certified-operators-fwqp2\" (UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.935954 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfrq5\" (UniqueName: \"kubernetes.io/projected/c68383ab-5214-41ca-9c93-78f39958a7b7-kube-api-access-cfrq5\") pod \"certified-operators-fwqp2\" (UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:33 crc kubenswrapper[4900]: I1202 14:07:33.936074 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-catalog-content\") pod \"certified-operators-fwqp2\" (UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:34 crc kubenswrapper[4900]: I1202 14:07:34.037394 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfrq5\" (UniqueName: \"kubernetes.io/projected/c68383ab-5214-41ca-9c93-78f39958a7b7-kube-api-access-cfrq5\") pod \"certified-operators-fwqp2\" (UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:34 crc kubenswrapper[4900]: I1202 14:07:34.037837 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-catalog-content\") pod \"certified-operators-fwqp2\" (UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:34 crc kubenswrapper[4900]: I1202 14:07:34.037886 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-utilities\") pod \"certified-operators-fwqp2\" (UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:34 crc kubenswrapper[4900]: I1202 14:07:34.038296 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-catalog-content\") pod \"certified-operators-fwqp2\" (UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:34 crc kubenswrapper[4900]: I1202 14:07:34.038395 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-utilities\") pod \"certified-operators-fwqp2\" 
(UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:34 crc kubenswrapper[4900]: I1202 14:07:34.058278 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfrq5\" (UniqueName: \"kubernetes.io/projected/c68383ab-5214-41ca-9c93-78f39958a7b7-kube-api-access-cfrq5\") pod \"certified-operators-fwqp2\" (UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:34 crc kubenswrapper[4900]: I1202 14:07:34.122862 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:34 crc kubenswrapper[4900]: I1202 14:07:34.610367 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwqp2"] Dec 02 14:07:34 crc kubenswrapper[4900]: I1202 14:07:34.696706 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwqp2" event={"ID":"c68383ab-5214-41ca-9c93-78f39958a7b7","Type":"ContainerStarted","Data":"39aff58fab8a8aa6ad2ecd07c7b1a1c43dee4a3abe06529a8f9bcee913feae60"} Dec 02 14:07:35 crc kubenswrapper[4900]: I1202 14:07:35.709482 4900 generic.go:334] "Generic (PLEG): container finished" podID="c68383ab-5214-41ca-9c93-78f39958a7b7" containerID="ef178d3222cb2aa81f7c9eb98a6622bdaef5d335abcbd95c31cf836081d582d8" exitCode=0 Dec 02 14:07:35 crc kubenswrapper[4900]: I1202 14:07:35.711128 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwqp2" event={"ID":"c68383ab-5214-41ca-9c93-78f39958a7b7","Type":"ContainerDied","Data":"ef178d3222cb2aa81f7c9eb98a6622bdaef5d335abcbd95c31cf836081d582d8"} Dec 02 14:07:35 crc kubenswrapper[4900]: I1202 14:07:35.716765 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqcm" event={"ID":"4d1c4eca-52c1-4143-832e-b377e4415feb","Type":"ContainerStarted","Data":"1ebd71b6383c7be753484ded45f06f124a65c147e1a9fc0958a06f023c9a45a9"} Dec 02 14:07:35 crc kubenswrapper[4900]: I1202 14:07:35.781891 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2hqcm" podStartSLOduration=6.367358697 podStartE2EDuration="9.781873789s" podCreationTimestamp="2025-12-02 14:07:26 +0000 UTC" firstStartedPulling="2025-12-02 14:07:31.363007375 +0000 UTC m=+1496.778821236" lastFinishedPulling="2025-12-02 14:07:34.777522477 +0000 UTC m=+1500.193336328" observedRunningTime="2025-12-02 14:07:35.780267094 +0000 UTC m=+1501.196080965" watchObservedRunningTime="2025-12-02 14:07:35.781873789 +0000 UTC m=+1501.197687660" Dec 02 14:07:36 crc kubenswrapper[4900]: I1202 14:07:36.606749 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:36 crc kubenswrapper[4900]: I1202 14:07:36.607167 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:36 crc kubenswrapper[4900]: I1202 14:07:36.658247 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:36 crc kubenswrapper[4900]: I1202 14:07:36.745605 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwqp2" 
event={"ID":"c68383ab-5214-41ca-9c93-78f39958a7b7","Type":"ContainerStarted","Data":"75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd"} Dec 02 14:07:37 crc kubenswrapper[4900]: E1202 14:07:37.689287 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:37 crc kubenswrapper[4900]: E1202 14:07:37.689860 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:37 crc kubenswrapper[4900]: E1202 14:07:37.690069 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:37 crc kubenswrapper[4900]: E1202 14:07:37.690102 4900 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server" Dec 02 14:07:37 crc kubenswrapper[4900]: E1202 14:07:37.690625 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:37 crc kubenswrapper[4900]: E1202 14:07:37.691939 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:37 crc kubenswrapper[4900]: E1202 14:07:37.693040 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:37 crc kubenswrapper[4900]: E1202 14:07:37.693067 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" 
podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovs-vswitchd" Dec 02 14:07:37 crc kubenswrapper[4900]: I1202 14:07:37.754754 4900 generic.go:334] "Generic (PLEG): container finished" podID="c68383ab-5214-41ca-9c93-78f39958a7b7" containerID="75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd" exitCode=0 Dec 02 14:07:37 crc kubenswrapper[4900]: I1202 14:07:37.754844 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwqp2" event={"ID":"c68383ab-5214-41ca-9c93-78f39958a7b7","Type":"ContainerDied","Data":"75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd"} Dec 02 14:07:39 crc kubenswrapper[4900]: I1202 14:07:39.779363 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwqp2" event={"ID":"c68383ab-5214-41ca-9c93-78f39958a7b7","Type":"ContainerStarted","Data":"4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4"} Dec 02 14:07:39 crc kubenswrapper[4900]: I1202 14:07:39.781767 4900 generic.go:334] "Generic (PLEG): container finished" podID="7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" containerID="dbab360a13373b0f107811e32ac3f4e9da16fc04fec31450a8afa527c07e139b" exitCode=0 Dec 02 14:07:39 crc kubenswrapper[4900]: I1202 14:07:39.781806 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d9bd66cf-nlpm2" event={"ID":"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5","Type":"ContainerDied","Data":"dbab360a13373b0f107811e32ac3f4e9da16fc04fec31450a8afa527c07e139b"} Dec 02 14:07:39 crc kubenswrapper[4900]: I1202 14:07:39.806146 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fwqp2" podStartSLOduration=3.486506231 podStartE2EDuration="6.8061249s" podCreationTimestamp="2025-12-02 14:07:33 +0000 UTC" firstStartedPulling="2025-12-02 14:07:35.713407398 +0000 UTC m=+1501.129221259" lastFinishedPulling="2025-12-02 14:07:39.033026047 +0000 UTC m=+1504.448839928" observedRunningTime="2025-12-02 14:07:39.801452179 +0000 UTC m=+1505.217266080" watchObservedRunningTime="2025-12-02 14:07:39.8061249 +0000 UTC m=+1505.221938761" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.144494 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.337412 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-public-tls-certs\") pod \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.337512 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-internal-tls-certs\") pod \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.337579 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-httpd-config\") pod \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.337610 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlgxv\" (UniqueName: \"kubernetes.io/projected/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-kube-api-access-wlgxv\") pod \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.337733 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-ovndb-tls-certs\") pod \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.337771 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-config\") pod \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.337800 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-combined-ca-bundle\") pod \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\" (UID: \"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5\") " Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.346965 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" (UID: "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.349038 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-kube-api-access-wlgxv" (OuterVolumeSpecName: "kube-api-access-wlgxv") pod "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" (UID: "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5"). InnerVolumeSpecName "kube-api-access-wlgxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.388490 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-config" (OuterVolumeSpecName: "config") pod "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" (UID: "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.391231 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" (UID: "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.401937 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" (UID: "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.408424 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" (UID: "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.420866 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" (UID: "7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.439829 4900 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.439875 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.439888 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlgxv\" (UniqueName: \"kubernetes.io/projected/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-kube-api-access-wlgxv\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.439905 4900 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.439919 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.439931 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.439943 4900 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.802153 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d9bd66cf-nlpm2" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.803285 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d9bd66cf-nlpm2" event={"ID":"7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5","Type":"ContainerDied","Data":"9238dcf7f7f82b6dc7a50ba1a116dc4de238023a2ef150a8893f84fb27dbe133"} Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.803343 4900 scope.go:117] "RemoveContainer" containerID="ff24395ff17544005ed3b0c813dfed8d4179e1e8e38687a4303ee6b98024dcbd" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.843441 4900 scope.go:117] "RemoveContainer" containerID="dbab360a13373b0f107811e32ac3f4e9da16fc04fec31450a8afa527c07e139b" Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.878619 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d9bd66cf-nlpm2"] Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.886319 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5d9bd66cf-nlpm2"] Dec 02 14:07:40 crc kubenswrapper[4900]: I1202 14:07:40.938147 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" path="/var/lib/kubelet/pods/7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5/volumes" Dec 02 14:07:42 crc kubenswrapper[4900]: E1202 14:07:42.690196 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:42 crc kubenswrapper[4900]: E1202 14:07:42.690893 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:42 crc kubenswrapper[4900]: E1202 14:07:42.691369 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:42 crc kubenswrapper[4900]: E1202 14:07:42.691441 4900 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server" Dec 02 14:07:42 crc kubenswrapper[4900]: E1202 14:07:42.691620 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:42 crc kubenswrapper[4900]: E1202 14:07:42.695585 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:42 crc kubenswrapper[4900]: E1202 14:07:42.697787 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:42 crc kubenswrapper[4900]: E1202 14:07:42.697845 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovs-vswitchd" Dec 02 14:07:44 crc kubenswrapper[4900]: I1202 14:07:44.123198 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:44 crc kubenswrapper[4900]: I1202 14:07:44.125997 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:44 crc kubenswrapper[4900]: I1202 14:07:44.196726 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:44 crc kubenswrapper[4900]: I1202 14:07:44.980398 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:45 crc kubenswrapper[4900]: I1202 14:07:45.051158 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwqp2"] Dec 02 14:07:45 crc kubenswrapper[4900]: I1202 14:07:45.117349 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:07:45 crc kubenswrapper[4900]: I1202 14:07:45.117838 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:07:46 crc kubenswrapper[4900]: I1202 14:07:46.685816 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:46 crc kubenswrapper[4900]: I1202 14:07:46.844206 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqcm"] Dec 02 14:07:46 crc kubenswrapper[4900]: I1202 14:07:46.920065 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2hqcm" podUID="4d1c4eca-52c1-4143-832e-b377e4415feb" 
containerName="registry-server" containerID="cri-o://1ebd71b6383c7be753484ded45f06f124a65c147e1a9fc0958a06f023c9a45a9" gracePeriod=2 Dec 02 14:07:46 crc kubenswrapper[4900]: I1202 14:07:46.920182 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fwqp2" podUID="c68383ab-5214-41ca-9c93-78f39958a7b7" containerName="registry-server" containerID="cri-o://4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4" gracePeriod=2 Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.608743 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:47 crc kubenswrapper[4900]: E1202 14:07:47.689298 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:47 crc kubenswrapper[4900]: E1202 14:07:47.689636 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:47 crc kubenswrapper[4900]: E1202 14:07:47.690235 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 02 14:07:47 crc kubenswrapper[4900]: E1202 14:07:47.690278 4900 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server" Dec 02 14:07:47 crc kubenswrapper[4900]: E1202 14:07:47.690424 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:47 crc kubenswrapper[4900]: E1202 14:07:47.692235 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:47 crc kubenswrapper[4900]: E1202 14:07:47.693540 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 02 14:07:47 crc kubenswrapper[4900]: E1202 14:07:47.693577 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-9cwqh" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovs-vswitchd" Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.788125 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-catalog-content\") pod \"c68383ab-5214-41ca-9c93-78f39958a7b7\" (UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.788355 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfrq5\" (UniqueName: \"kubernetes.io/projected/c68383ab-5214-41ca-9c93-78f39958a7b7-kube-api-access-cfrq5\") pod \"c68383ab-5214-41ca-9c93-78f39958a7b7\" (UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.788405 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-utilities\") pod \"c68383ab-5214-41ca-9c93-78f39958a7b7\" (UID: \"c68383ab-5214-41ca-9c93-78f39958a7b7\") " Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.789871 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-utilities" (OuterVolumeSpecName: "utilities") pod "c68383ab-5214-41ca-9c93-78f39958a7b7" (UID: "c68383ab-5214-41ca-9c93-78f39958a7b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.794750 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c68383ab-5214-41ca-9c93-78f39958a7b7-kube-api-access-cfrq5" (OuterVolumeSpecName: "kube-api-access-cfrq5") pod "c68383ab-5214-41ca-9c93-78f39958a7b7" (UID: "c68383ab-5214-41ca-9c93-78f39958a7b7"). InnerVolumeSpecName "kube-api-access-cfrq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.890082 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfrq5\" (UniqueName: \"kubernetes.io/projected/c68383ab-5214-41ca-9c93-78f39958a7b7-kube-api-access-cfrq5\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.890121 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.931419 4900 generic.go:334] "Generic (PLEG): container finished" podID="c68383ab-5214-41ca-9c93-78f39958a7b7" containerID="4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4" exitCode=0 Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.931471 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwqp2" event={"ID":"c68383ab-5214-41ca-9c93-78f39958a7b7","Type":"ContainerDied","Data":"4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4"} Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.931497 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwqp2" event={"ID":"c68383ab-5214-41ca-9c93-78f39958a7b7","Type":"ContainerDied","Data":"39aff58fab8a8aa6ad2ecd07c7b1a1c43dee4a3abe06529a8f9bcee913feae60"} Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.931515 4900 scope.go:117] "RemoveContainer" containerID="4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4" Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.931624 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fwqp2" Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.936724 4900 generic.go:334] "Generic (PLEG): container finished" podID="4d1c4eca-52c1-4143-832e-b377e4415feb" containerID="1ebd71b6383c7be753484ded45f06f124a65c147e1a9fc0958a06f023c9a45a9" exitCode=0 Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.936744 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqcm" event={"ID":"4d1c4eca-52c1-4143-832e-b377e4415feb","Type":"ContainerDied","Data":"1ebd71b6383c7be753484ded45f06f124a65c147e1a9fc0958a06f023c9a45a9"} Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.953502 4900 scope.go:117] "RemoveContainer" containerID="75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd" Dec 02 14:07:47 crc kubenswrapper[4900]: I1202 14:07:47.973533 4900 scope.go:117] "RemoveContainer" containerID="ef178d3222cb2aa81f7c9eb98a6622bdaef5d335abcbd95c31cf836081d582d8" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.012218 4900 scope.go:117] "RemoveContainer" containerID="4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4" Dec 02 14:07:48 crc kubenswrapper[4900]: E1202 14:07:48.012823 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4\": container with ID starting with 4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4 not found: ID does not exist" containerID="4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.012867 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4"} err="failed to get container status \"4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4\": rpc error: code = NotFound desc = could not find container \"4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4\": container with ID starting with 4a4c05c8036aca6a652a8dfc138308ce2970127441658c08ddd0d1387b1713c4 not found: ID does not exist" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.013023 4900 scope.go:117] "RemoveContainer" containerID="75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd" Dec 02 14:07:48 crc kubenswrapper[4900]: E1202 14:07:48.013376 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd\": container with ID starting with 75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd not found: ID does not exist" containerID="75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.013399 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd"} err="failed to get container status \"75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd\": rpc error: code = NotFound desc = could not find container \"75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd\": container with ID starting with 75aa2fd62ec6cce5714db68ad401dbdda7fa2ddd90beb59042af6b1850c171fd not found: ID does not exist" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.013414 4900 
scope.go:117] "RemoveContainer" containerID="ef178d3222cb2aa81f7c9eb98a6622bdaef5d335abcbd95c31cf836081d582d8" Dec 02 14:07:48 crc kubenswrapper[4900]: E1202 14:07:48.013656 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef178d3222cb2aa81f7c9eb98a6622bdaef5d335abcbd95c31cf836081d582d8\": container with ID starting with ef178d3222cb2aa81f7c9eb98a6622bdaef5d335abcbd95c31cf836081d582d8 not found: ID does not exist" containerID="ef178d3222cb2aa81f7c9eb98a6622bdaef5d335abcbd95c31cf836081d582d8" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.013695 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef178d3222cb2aa81f7c9eb98a6622bdaef5d335abcbd95c31cf836081d582d8"} err="failed to get container status \"ef178d3222cb2aa81f7c9eb98a6622bdaef5d335abcbd95c31cf836081d582d8\": rpc error: code = NotFound desc = could not find container \"ef178d3222cb2aa81f7c9eb98a6622bdaef5d335abcbd95c31cf836081d582d8\": container with ID starting with ef178d3222cb2aa81f7c9eb98a6622bdaef5d335abcbd95c31cf836081d582d8 not found: ID does not exist" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.050955 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c68383ab-5214-41ca-9c93-78f39958a7b7" (UID: "c68383ab-5214-41ca-9c93-78f39958a7b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.093252 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68383ab-5214-41ca-9c93-78f39958a7b7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.260756 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwqp2"] Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.265715 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fwqp2"] Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.486535 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.521145 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-catalog-content\") pod \"4d1c4eca-52c1-4143-832e-b377e4415feb\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.521315 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-utilities\") pod \"4d1c4eca-52c1-4143-832e-b377e4415feb\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.521417 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-259h6\" (UniqueName: \"kubernetes.io/projected/4d1c4eca-52c1-4143-832e-b377e4415feb-kube-api-access-259h6\") pod \"4d1c4eca-52c1-4143-832e-b377e4415feb\" (UID: \"4d1c4eca-52c1-4143-832e-b377e4415feb\") " Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.523839 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-utilities" (OuterVolumeSpecName: "utilities") pod "4d1c4eca-52c1-4143-832e-b377e4415feb" (UID: "4d1c4eca-52c1-4143-832e-b377e4415feb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.526825 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1c4eca-52c1-4143-832e-b377e4415feb-kube-api-access-259h6" (OuterVolumeSpecName: "kube-api-access-259h6") pod "4d1c4eca-52c1-4143-832e-b377e4415feb" (UID: "4d1c4eca-52c1-4143-832e-b377e4415feb"). InnerVolumeSpecName "kube-api-access-259h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.549151 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d1c4eca-52c1-4143-832e-b377e4415feb" (UID: "4d1c4eca-52c1-4143-832e-b377e4415feb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.623464 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.623520 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-259h6\" (UniqueName: \"kubernetes.io/projected/4d1c4eca-52c1-4143-832e-b377e4415feb-kube-api-access-259h6\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.623542 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1c4eca-52c1-4143-832e-b377e4415feb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.936139 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c68383ab-5214-41ca-9c93-78f39958a7b7" path="/var/lib/kubelet/pods/c68383ab-5214-41ca-9c93-78f39958a7b7/volumes" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.953407 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hqcm" event={"ID":"4d1c4eca-52c1-4143-832e-b377e4415feb","Type":"ContainerDied","Data":"85fb84cf0eb6c56ce31072e1cf5434eb9709a5614c1fb107a2513b25779083b7"} Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.953486 4900 scope.go:117] "RemoveContainer" containerID="1ebd71b6383c7be753484ded45f06f124a65c147e1a9fc0958a06f023c9a45a9" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.954028 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hqcm" Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.985598 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqcm"] Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.992511 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hqcm"] Dec 02 14:07:48 crc kubenswrapper[4900]: I1202 14:07:48.993263 4900 scope.go:117] "RemoveContainer" containerID="0d4962ac1c137968a6d0fae98020211c840caaced554851bda72133c7c4f59b4" Dec 02 14:07:49 crc kubenswrapper[4900]: I1202 14:07:49.031460 4900 scope.go:117] "RemoveContainer" containerID="38fe3996b8fc9746a9b766644272a3c9c0a2340267110541a84e74475fb110b6" Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.852932 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zqrj5"] Dec 02 14:07:50 crc kubenswrapper[4900]: E1202 14:07:50.853661 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68383ab-5214-41ca-9c93-78f39958a7b7" containerName="extract-utilities" Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853677 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68383ab-5214-41ca-9c93-78f39958a7b7" containerName="extract-utilities" Dec 02 14:07:50 crc kubenswrapper[4900]: E1202 14:07:50.853695 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1c4eca-52c1-4143-832e-b377e4415feb" containerName="extract-content" Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853702 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1c4eca-52c1-4143-832e-b377e4415feb" containerName="extract-content" Dec 02 14:07:50 crc kubenswrapper[4900]: E1202 14:07:50.853728 4900 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" containerName="neutron-httpd"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853734 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" containerName="neutron-httpd"
Dec 02 14:07:50 crc kubenswrapper[4900]: E1202 14:07:50.853743 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68383ab-5214-41ca-9c93-78f39958a7b7" containerName="registry-server"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853751 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68383ab-5214-41ca-9c93-78f39958a7b7" containerName="registry-server"
Dec 02 14:07:50 crc kubenswrapper[4900]: E1202 14:07:50.853761 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" containerName="neutron-api"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853766 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" containerName="neutron-api"
Dec 02 14:07:50 crc kubenswrapper[4900]: E1202 14:07:50.853781 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1c4eca-52c1-4143-832e-b377e4415feb" containerName="registry-server"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853790 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1c4eca-52c1-4143-832e-b377e4415feb" containerName="registry-server"
Dec 02 14:07:50 crc kubenswrapper[4900]: E1202 14:07:50.853801 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68383ab-5214-41ca-9c93-78f39958a7b7" containerName="extract-content"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853808 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68383ab-5214-41ca-9c93-78f39958a7b7" containerName="extract-content"
Dec 02 14:07:50 crc kubenswrapper[4900]: E1202 14:07:50.853817 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1c4eca-52c1-4143-832e-b377e4415feb" containerName="extract-utilities"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853822 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1c4eca-52c1-4143-832e-b377e4415feb" containerName="extract-utilities"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853950 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68383ab-5214-41ca-9c93-78f39958a7b7" containerName="registry-server"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853962 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1c4eca-52c1-4143-832e-b377e4415feb" containerName="registry-server"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853973 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" containerName="neutron-api"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.853984 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6758bc-fb2a-44e4-8d0a-bc71fcc678f5" containerName="neutron-httpd"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.855033 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.881012 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqrj5"]
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.922119 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1c4eca-52c1-4143-832e-b377e4415feb" path="/var/lib/kubelet/pods/4d1c4eca-52c1-4143-832e-b377e4415feb/volumes"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.969000 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-catalog-content\") pod \"community-operators-zqrj5\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") " pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.969059 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f96qx\" (UniqueName: \"kubernetes.io/projected/0be36eb4-f760-4583-aaf9-104f96592096-kube-api-access-f96qx\") pod \"community-operators-zqrj5\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") " pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:07:50 crc kubenswrapper[4900]: I1202 14:07:50.969101 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-utilities\") pod \"community-operators-zqrj5\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") " pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.070995 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-catalog-content\") pod \"community-operators-zqrj5\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") " pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.071068 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f96qx\" (UniqueName: \"kubernetes.io/projected/0be36eb4-f760-4583-aaf9-104f96592096-kube-api-access-f96qx\") pod \"community-operators-zqrj5\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") " pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.071143 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-utilities\") pod \"community-operators-zqrj5\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") " pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.071625 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-catalog-content\") pod \"community-operators-zqrj5\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") " pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.071675 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-utilities\") pod \"community-operators-zqrj5\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") " pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.089089 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f96qx\" (UniqueName: \"kubernetes.io/projected/0be36eb4-f760-4583-aaf9-104f96592096-kube-api-access-f96qx\") pod \"community-operators-zqrj5\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") " pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.192175 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.683577 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqrj5"]
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.900498 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9cwqh_f79247d6-28ab-4234-a191-8799418aa3ea/ovs-vswitchd/0.log"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.901490 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9cwqh"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.993659 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrj5" event={"ID":"0be36eb4-f760-4583-aaf9-104f96592096","Type":"ContainerStarted","Data":"b683a38bcad83f8496a11c31c6140668f7627d1852d1771ac90c8ef094293f11"}
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.995411 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9cwqh_f79247d6-28ab-4234-a191-8799418aa3ea/ovs-vswitchd/0.log"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.996178 4900 generic.go:334] "Generic (PLEG): container finished" podID="f79247d6-28ab-4234-a191-8799418aa3ea" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27" exitCode=137
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.996229 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9cwqh" event={"ID":"f79247d6-28ab-4234-a191-8799418aa3ea","Type":"ContainerDied","Data":"2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27"}
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.996247 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9cwqh" event={"ID":"f79247d6-28ab-4234-a191-8799418aa3ea","Type":"ContainerDied","Data":"c69a2ac94588ffbd4f09ec232f2e5d0bdddb93f20ea02ec34a5f1d1973fc1ec8"}
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.996264 4900 scope.go:117] "RemoveContainer" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27"
Dec 02 14:07:51 crc kubenswrapper[4900]: I1202 14:07:51.996294 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9cwqh"
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.003722 4900 generic.go:334] "Generic (PLEG): container finished" podID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerID="6f12993e1fc195acb36a4222c9e80cc1d4aeaa566382dddf8b897df3ae681468" exitCode=137
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.003760 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"6f12993e1fc195acb36a4222c9e80cc1d4aeaa566382dddf8b897df3ae681468"}
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.053396 4900 scope.go:117] "RemoveContainer" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305"
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.074520 4900 scope.go:117] "RemoveContainer" containerID="83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6"
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.086467 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-etc-ovs\") pod \"f79247d6-28ab-4234-a191-8799418aa3ea\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.086539 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "f79247d6-28ab-4234-a191-8799418aa3ea" (UID: "f79247d6-28ab-4234-a191-8799418aa3ea"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.086550 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79247d6-28ab-4234-a191-8799418aa3ea-scripts\") pod \"f79247d6-28ab-4234-a191-8799418aa3ea\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.086705 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-lib\") pod \"f79247d6-28ab-4234-a191-8799418aa3ea\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.086747 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr7mb\" (UniqueName: \"kubernetes.io/projected/f79247d6-28ab-4234-a191-8799418aa3ea-kube-api-access-hr7mb\") pod \"f79247d6-28ab-4234-a191-8799418aa3ea\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.086776 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-run\") pod \"f79247d6-28ab-4234-a191-8799418aa3ea\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.086822 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-log\") pod \"f79247d6-28ab-4234-a191-8799418aa3ea\" (UID: \"f79247d6-28ab-4234-a191-8799418aa3ea\") "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.086916 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-run" (OuterVolumeSpecName: "var-run") pod "f79247d6-28ab-4234-a191-8799418aa3ea" (UID: "f79247d6-28ab-4234-a191-8799418aa3ea"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.086878 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-lib" (OuterVolumeSpecName: "var-lib") pod "f79247d6-28ab-4234-a191-8799418aa3ea" (UID: "f79247d6-28ab-4234-a191-8799418aa3ea"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.087003 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-log" (OuterVolumeSpecName: "var-log") pod "f79247d6-28ab-4234-a191-8799418aa3ea" (UID: "f79247d6-28ab-4234-a191-8799418aa3ea"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.087444 4900 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-etc-ovs\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.087479 4900 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-lib\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.087501 4900 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-run\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.087518 4900 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f79247d6-28ab-4234-a191-8799418aa3ea-var-log\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.088178 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79247d6-28ab-4234-a191-8799418aa3ea-scripts" (OuterVolumeSpecName: "scripts") pod "f79247d6-28ab-4234-a191-8799418aa3ea" (UID: "f79247d6-28ab-4234-a191-8799418aa3ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.092833 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79247d6-28ab-4234-a191-8799418aa3ea-kube-api-access-hr7mb" (OuterVolumeSpecName: "kube-api-access-hr7mb") pod "f79247d6-28ab-4234-a191-8799418aa3ea" (UID: "f79247d6-28ab-4234-a191-8799418aa3ea"). InnerVolumeSpecName "kube-api-access-hr7mb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.102688 4900 scope.go:117] "RemoveContainer" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27"
Dec 02 14:07:52 crc kubenswrapper[4900]: E1202 14:07:52.103141 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27\": container with ID starting with 2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27 not found: ID does not exist" containerID="2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27"
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.103189 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27"} err="failed to get container status \"2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27\": rpc error: code = NotFound desc = could not find container \"2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27\": container with ID starting with 2682f01623fdd5e296ecf2d701e685442599c9d88bebb672e8ced17e5cd04e27 not found: ID does not exist"
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.103217 4900 scope.go:117] "RemoveContainer" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305"
Dec 02 14:07:52 crc kubenswrapper[4900]: E1202 14:07:52.103494 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305\": container with ID starting with 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 not found: ID does not exist" containerID="231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305"
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.103552 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305"} err="failed to get container status \"231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305\": rpc error: code = NotFound desc = could not find container \"231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305\": container with ID starting with 231915ba635460a43b12a3bf32d61fcb230d8b8ea03df7231ce1101abc69d305 not found: ID does not exist"
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.103602 4900 scope.go:117] "RemoveContainer" containerID="83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6"
Dec 02 14:07:52 crc kubenswrapper[4900]: E1202 14:07:52.104379 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6\": container with ID starting with 83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6 not found: ID does not exist" containerID="83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6"
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.104448 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6"} err="failed to get container status \"83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6\": rpc error: code = NotFound desc = could not find container \"83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6\": container with ID starting with 83ab219a7ce43086be70584e9bae279679de06f4a3ea7ab91c217c86c0af2dd6 not found: ID does not exist"
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.190192 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79247d6-28ab-4234-a191-8799418aa3ea-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.190253 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr7mb\" (UniqueName: \"kubernetes.io/projected/f79247d6-28ab-4234-a191-8799418aa3ea-kube-api-access-hr7mb\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.322982 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-9cwqh"]
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.327937 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-9cwqh"]
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.523382 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.698602 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift\") pod \"305da939-8e7b-4fce-95f9-95d90218a1f0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.698742 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwxjx\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-kube-api-access-qwxjx\") pod \"305da939-8e7b-4fce-95f9-95d90218a1f0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.698783 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"305da939-8e7b-4fce-95f9-95d90218a1f0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.698830 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-cache\") pod \"305da939-8e7b-4fce-95f9-95d90218a1f0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.698914 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-lock\") pod \"305da939-8e7b-4fce-95f9-95d90218a1f0\" (UID: \"305da939-8e7b-4fce-95f9-95d90218a1f0\") "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.699466 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-cache" (OuterVolumeSpecName: "cache") pod "305da939-8e7b-4fce-95f9-95d90218a1f0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.699501 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-lock" (OuterVolumeSpecName: "lock") pod "305da939-8e7b-4fce-95f9-95d90218a1f0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.703213 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "305da939-8e7b-4fce-95f9-95d90218a1f0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.703360 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-kube-api-access-qwxjx" (OuterVolumeSpecName: "kube-api-access-qwxjx") pod "305da939-8e7b-4fce-95f9-95d90218a1f0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0"). InnerVolumeSpecName "kube-api-access-qwxjx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.703908 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "305da939-8e7b-4fce-95f9-95d90218a1f0" (UID: "305da939-8e7b-4fce-95f9-95d90218a1f0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.801715 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwxjx\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-kube-api-access-qwxjx\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.801818 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.801842 4900 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-cache\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.801862 4900 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/305da939-8e7b-4fce-95f9-95d90218a1f0-lock\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.801884 4900 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/305da939-8e7b-4fce-95f9-95d90218a1f0-etc-swift\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.815894 4900 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.904045 4900 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Dec 02 14:07:52 crc kubenswrapper[4900]: I1202 14:07:52.942967 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" path="/var/lib/kubelet/pods/f79247d6-28ab-4234-a191-8799418aa3ea/volumes"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.028124 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"305da939-8e7b-4fce-95f9-95d90218a1f0","Type":"ContainerDied","Data":"a02baaad4629d7d0beb5a9e4cdf39ed709a63466f9a823870e16e8f6f9fa9d3a"}
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.028199 4900 scope.go:117] "RemoveContainer" containerID="6f12993e1fc195acb36a4222c9e80cc1d4aeaa566382dddf8b897df3ae681468"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.028432 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.031871 4900 generic.go:334] "Generic (PLEG): container finished" podID="0be36eb4-f760-4583-aaf9-104f96592096" containerID="55c94d7963bd40363b2dc323dab73378452ab4b675cda236a885da7fe2c32aee" exitCode=0
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.031875 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrj5" event={"ID":"0be36eb4-f760-4583-aaf9-104f96592096","Type":"ContainerDied","Data":"55c94d7963bd40363b2dc323dab73378452ab4b675cda236a885da7fe2c32aee"}
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.063443 4900 scope.go:117] "RemoveContainer" containerID="92b595b2d89b2be8cfc2216546011c9aad218c2d134cbf0d7dd2eeded97e32ae"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.084235 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.092820 4900 scope.go:117] "RemoveContainer" containerID="650e07decb4d0921b10393aec4c8765f7b943d7fb39cad739dc92c08bc0cf83c"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.099457 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"]
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.147288 4900 scope.go:117] "RemoveContainer" containerID="5821e46042485c1373fa8cae7c61b288c7c4cea999d146348d992d1f1ebe01ae"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.172202 4900 scope.go:117] "RemoveContainer" containerID="b9903237aae7b30e4786154f26720bc4cccb8456c76a64b913e79db33e9723cc"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.207313 4900 scope.go:117] "RemoveContainer" containerID="25219f1dbf2d7a01dd6cfe25cfa91ecaaaabec4527f8896e9ed0b10b42b25db3"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.245556 4900 scope.go:117] "RemoveContainer" containerID="7f2a46fb8785892c4a865fae00d8ed6142ab75fae046b42634d84a99c5fcf69d"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.273820 4900 scope.go:117] "RemoveContainer" containerID="8e6a67bb6f1294f115624e7162a130f3eabff83ef59d7b2a1a87dc5e03f7e6e7"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.298694 4900 scope.go:117] "RemoveContainer" containerID="38eff436fe11c5890e207833fe423224c1e521b3b82a519361fefcff2af660ad"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.319224 4900 scope.go:117] "RemoveContainer" containerID="46f15348813d8055006838bad9d40dbb909e9eabfc30521e1baeaf728552da63"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.351434 4900 scope.go:117] "RemoveContainer" containerID="bf6dbc2d90f268fe7fed54cad255fdedb06111980e4d028a6b734115fcd4bff2"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.378768 4900 scope.go:117] "RemoveContainer" containerID="10a9aaa8d1a2413e0ef899da8043a3d293c39ba29883684daac125f654e247c6"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.408691 4900 scope.go:117] "RemoveContainer" containerID="535a4b01d9acc099e8e0cf36306f3d1613b8d40a0a2886c27a5e3adb4d22106c"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.433454 4900 scope.go:117] "RemoveContainer" containerID="304355d78e40f6ca3b22a607c420ecdb93fd14f1a0a1d10ee78e70aca9138742"
Dec 02 14:07:53 crc kubenswrapper[4900]: I1202 14:07:53.474051 4900 scope.go:117] "RemoveContainer" containerID="5e0242301bbd13a18a7ab682fc5ef7d58a6f6c86146abab5ab241882c022c72e"
Dec 02 14:07:54 crc kubenswrapper[4900]: I1202 14:07:54.926083 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" path="/var/lib/kubelet/pods/305da939-8e7b-4fce-95f9-95d90218a1f0/volumes"
Dec 02 14:07:55 crc kubenswrapper[4900]: I1202 14:07:55.071343 4900 generic.go:334] "Generic (PLEG): container finished" podID="0be36eb4-f760-4583-aaf9-104f96592096" containerID="648168731c82dd7fc4f92873d2e1de2642d5bea6a0000385ebd4392d16744c3a" exitCode=0
Dec 02 14:07:55 crc kubenswrapper[4900]: I1202 14:07:55.071412 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrj5" event={"ID":"0be36eb4-f760-4583-aaf9-104f96592096","Type":"ContainerDied","Data":"648168731c82dd7fc4f92873d2e1de2642d5bea6a0000385ebd4392d16744c3a"}
Dec 02 14:07:57 crc kubenswrapper[4900]: I1202 14:07:57.103572 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrj5" event={"ID":"0be36eb4-f760-4583-aaf9-104f96592096","Type":"ContainerStarted","Data":"704c20a8e7edf78bfc0d4b72f8940e3c798e3825bdceede0062006db43cde5eb"}
Dec 02 14:07:57 crc kubenswrapper[4900]: I1202 14:07:57.132050 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zqrj5" podStartSLOduration=3.924593036 podStartE2EDuration="7.132028877s" podCreationTimestamp="2025-12-02 14:07:50 +0000 UTC" firstStartedPulling="2025-12-02 14:07:53.034638224 +0000 UTC m=+1518.450452075" lastFinishedPulling="2025-12-02 14:07:56.242074025 +0000 UTC m=+1521.657887916" observedRunningTime="2025-12-02 14:07:57.123147788 +0000 UTC m=+1522.538961679" watchObservedRunningTime="2025-12-02 14:07:57.132028877 +0000 UTC m=+1522.547842748"
Dec 02 14:07:57 crc kubenswrapper[4900]: I1202 14:07:57.143604 4900 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podad78a256-27f0-46a9-addb-dbc7b41bebd2"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podad78a256-27f0-46a9-addb-dbc7b41bebd2] : Timed out while waiting for systemd to remove kubepods-besteffort-podad78a256_27f0_46a9_addb_dbc7b41bebd2.slice"
Dec 02 14:07:57 crc kubenswrapper[4900]: I1202 14:07:57.151782 4900 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod9dc80fa7-9bd3-4ed5-81d3-dfb8caa08571] : Timed out while waiting for systemd to remove kubepods-besteffort-pod9dc80fa7_9bd3_4ed5_81d3_dfb8caa08571.slice"
Dec 02 14:08:01 crc kubenswrapper[4900]: I1202 14:08:01.193036 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:08:01 crc kubenswrapper[4900]: I1202 14:08:01.193671 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:08:01 crc kubenswrapper[4900]: I1202 14:08:01.272820 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:08:02 crc kubenswrapper[4900]: I1202 14:08:02.229980 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:08:02 crc kubenswrapper[4900]: I1202 14:08:02.292206 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqrj5"]
Dec 02 14:08:04 crc kubenswrapper[4900]: I1202 14:08:04.178690 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zqrj5" podUID="0be36eb4-f760-4583-aaf9-104f96592096" containerName="registry-server" containerID="cri-o://704c20a8e7edf78bfc0d4b72f8940e3c798e3825bdceede0062006db43cde5eb" gracePeriod=2
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.227685 4900 generic.go:334] "Generic (PLEG): container finished" podID="0be36eb4-f760-4583-aaf9-104f96592096" containerID="704c20a8e7edf78bfc0d4b72f8940e3c798e3825bdceede0062006db43cde5eb" exitCode=0
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.227731 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrj5" event={"ID":"0be36eb4-f760-4583-aaf9-104f96592096","Type":"ContainerDied","Data":"704c20a8e7edf78bfc0d4b72f8940e3c798e3825bdceede0062006db43cde5eb"}
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.529782 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.626544 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f96qx\" (UniqueName: \"kubernetes.io/projected/0be36eb4-f760-4583-aaf9-104f96592096-kube-api-access-f96qx\") pod \"0be36eb4-f760-4583-aaf9-104f96592096\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") "
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.626945 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-catalog-content\") pod \"0be36eb4-f760-4583-aaf9-104f96592096\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") "
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.627029 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-utilities\") pod \"0be36eb4-f760-4583-aaf9-104f96592096\" (UID: \"0be36eb4-f760-4583-aaf9-104f96592096\") "
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.628731 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-utilities" (OuterVolumeSpecName: "utilities") pod "0be36eb4-f760-4583-aaf9-104f96592096" (UID: "0be36eb4-f760-4583-aaf9-104f96592096"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.646180 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be36eb4-f760-4583-aaf9-104f96592096-kube-api-access-f96qx" (OuterVolumeSpecName: "kube-api-access-f96qx") pod "0be36eb4-f760-4583-aaf9-104f96592096" (UID: "0be36eb4-f760-4583-aaf9-104f96592096"). InnerVolumeSpecName "kube-api-access-f96qx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.685656 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0be36eb4-f760-4583-aaf9-104f96592096" (UID: "0be36eb4-f760-4583-aaf9-104f96592096"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.728927 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f96qx\" (UniqueName: \"kubernetes.io/projected/0be36eb4-f760-4583-aaf9-104f96592096-kube-api-access-f96qx\") on node \"crc\" DevicePath \"\""
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.728956 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 14:08:06 crc kubenswrapper[4900]: I1202 14:08:06.728964 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be36eb4-f760-4583-aaf9-104f96592096-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 14:08:07 crc kubenswrapper[4900]: I1202 14:08:07.240464 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrj5" event={"ID":"0be36eb4-f760-4583-aaf9-104f96592096","Type":"ContainerDied","Data":"b683a38bcad83f8496a11c31c6140668f7627d1852d1771ac90c8ef094293f11"}
Dec 02 14:08:07 crc kubenswrapper[4900]: I1202 14:08:07.240556 4900 scope.go:117] "RemoveContainer" containerID="704c20a8e7edf78bfc0d4b72f8940e3c798e3825bdceede0062006db43cde5eb"
Dec 02 14:08:07 crc kubenswrapper[4900]: I1202 14:08:07.241747 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqrj5"
Dec 02 14:08:07 crc kubenswrapper[4900]: I1202 14:08:07.269634 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqrj5"]
Dec 02 14:08:07 crc kubenswrapper[4900]: I1202 14:08:07.277059 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zqrj5"]
Dec 02 14:08:07 crc kubenswrapper[4900]: I1202 14:08:07.291441 4900 scope.go:117] "RemoveContainer" containerID="648168731c82dd7fc4f92873d2e1de2642d5bea6a0000385ebd4392d16744c3a"
Dec 02 14:08:07 crc kubenswrapper[4900]: I1202 14:08:07.344721 4900 scope.go:117] "RemoveContainer" containerID="55c94d7963bd40363b2dc323dab73378452ab4b675cda236a885da7fe2c32aee"
Dec 02 14:08:08 crc kubenswrapper[4900]: I1202 14:08:08.928017 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be36eb4-f760-4583-aaf9-104f96592096" path="/var/lib/kubelet/pods/0be36eb4-f760-4583-aaf9-104f96592096/volumes"
Dec 02 14:08:15 crc kubenswrapper[4900]: I1202 14:08:15.116573 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:08:15 crc kubenswrapper[4900]: I1202 14:08:15.117303 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:08:15 crc kubenswrapper[4900]: I1202 14:08:15.117382 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq"
Dec 02 14:08:15 crc kubenswrapper[4900]: I1202 14:08:15.118350 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 14:08:15 crc kubenswrapper[4900]: I1202 14:08:15.118451 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1" gracePeriod=600
Dec 02 14:08:15 crc kubenswrapper[4900]: E1202 14:08:15.255444 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:08:15 crc kubenswrapper[4900]: I1202 14:08:15.363189 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1" exitCode=0
Dec 02 14:08:15 crc kubenswrapper[4900]: I1202 14:08:15.363230 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"}
Dec 02 14:08:15 crc kubenswrapper[4900]: I1202 14:08:15.363309 4900 scope.go:117] "RemoveContainer" containerID="71201562a586bb41b092fbbc0aed881de288c0da40461c0877afbe0f47cb3b45"
Dec 02 14:08:15 crc kubenswrapper[4900]: I1202 14:08:15.364008 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:08:15 crc kubenswrapper[4900]: E1202 14:08:15.364365 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:08:29 crc kubenswrapper[4900]: I1202 14:08:29.909932 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:08:29 crc kubenswrapper[4900]: E1202 14:08:29.910631 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:08:40 crc kubenswrapper[4900]: I1202 14:08:40.910151 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:08:40 crc kubenswrapper[4900]: E1202 14:08:40.911235 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:08:55 crc kubenswrapper[4900]: I1202 14:08:55.911053 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:08:55 crc kubenswrapper[4900]: E1202 14:08:55.912257 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:08:56 crc kubenswrapper[4900]: I1202 14:08:56.601133 4900 scope.go:117] "RemoveContainer" containerID="b0df1054bf4eef242497c25a69090f320c164e3ebb388b369e71485192e05d15"
Dec 02 14:08:56 crc kubenswrapper[4900]: I1202 14:08:56.641372 4900 scope.go:117] "RemoveContainer" containerID="c630f99583cbed16ffab5fc588ae2a735afc6b676c53185c229cd038d453fe1b"
Dec 02 14:08:56 crc kubenswrapper[4900]: I1202 14:08:56.696162 4900 scope.go:117] "RemoveContainer" containerID="0c0293081205009aa4d4b75d64d6c80b5931991b50a6c6d78594b61086ccc082"
Dec 02 14:08:56 crc kubenswrapper[4900]: I1202 14:08:56.720405 4900 scope.go:117] "RemoveContainer" containerID="4ee103e5b9c065503521c96cc354e2b1419987ab208022be75f831cef4f7e7a6"
Dec 02 14:08:56 crc kubenswrapper[4900]: I1202 14:08:56.776903 4900 scope.go:117] "RemoveContainer" containerID="f9bf858d09a52dc8dcf968ca816270985911f7bf9e4f15a8be6978b0c25c46b5"
Dec 02 14:08:56 crc kubenswrapper[4900]: I1202 14:08:56.807222 4900 scope.go:117] "RemoveContainer" containerID="cbb6d068dcd832de001c1d6ffc2cef4ef552752c3ab552a513c3bee122eaee7c"
Dec 02 14:08:56 crc kubenswrapper[4900]: I1202 14:08:56.845760 4900 scope.go:117] "RemoveContainer" containerID="c529d208dadf5ed6b32c89601f28c3dcd038273fd7d9fdc091fb6f954fe330e7"
Dec 02 14:08:56 crc kubenswrapper[4900]: I1202 14:08:56.874081 4900 scope.go:117] "RemoveContainer" containerID="802a012ebaf25a3865974158aa1e628674a0cf876aad0fc0d5862b083675bce5"
Dec 02 14:08:56 crc kubenswrapper[4900]: I1202 14:08:56.897064 4900 scope.go:117] "RemoveContainer" containerID="e4169f4de5b556c9ce6324a489231fb4175368c98080d2ae68fb1b574e2bdaf0"
Dec 02 14:09:08 crc kubenswrapper[4900]: I1202 14:09:08.910946 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:09:08 crc kubenswrapper[4900]: E1202 14:09:08.911690 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:09:21 crc kubenswrapper[4900]: I1202 14:09:21.910613 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:09:21 crc kubenswrapper[4900]: E1202 14:09:21.911278 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:09:34 crc kubenswrapper[4900]: I1202 14:09:34.918591 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:09:34 crc kubenswrapper[4900]: E1202 14:09:34.919617 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:09:49 crc kubenswrapper[4900]: I1202 14:09:49.911315 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:09:49 crc kubenswrapper[4900]: E1202 14:09:49.912555 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.139902 4900 scope.go:117] "RemoveContainer" containerID="f3e31a0037c3864d8fbc36b0641f6e02b3881334b51f070abf36139382054733"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.181454 4900 scope.go:117] "RemoveContainer" containerID="baef9a5f21779e2a4b34b8427e41273cb4427c3539b16cff9e16ec581a7c2e72"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.225727 4900 scope.go:117] "RemoveContainer" containerID="83190c57d144ed6efda2e3201abfd9fa166f38ec80761516c2e6b861cd107366"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.262308 4900 scope.go:117] "RemoveContainer" containerID="228abee1524ca28344e96e640d2ce55eb6b955c5f7389ea75a0f70333d613ab6"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.290114 4900 scope.go:117] "RemoveContainer" containerID="8087a9560c219873c90a9d53097b96c8fded5db305f1645c12fdc53e707047ef"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.339677 4900 scope.go:117] "RemoveContainer" containerID="a7e40440ff834859e3f71bef147e2660006177005143793d11dd69ed36a33d40"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.386258 4900 scope.go:117] "RemoveContainer" containerID="12ecfd9dec64506cfb20c1aa9db5f5e504ac20447db837d5296f0bc7d6ba2db1"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.413461 4900 scope.go:117] "RemoveContainer" containerID="82fed4eee43ecffe9601676c7729bfdbd701095eacc9481a8ceb34fd4b3b0c5d"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.439460 4900 scope.go:117] "RemoveContainer" containerID="aaa4c89b780f334d06fd958b796370ba5e407ef9eb4f3e2ec808d619b9abf8d4"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.471009 4900 scope.go:117] "RemoveContainer" containerID="73133b2ba4989cd7f78a761541a674e62c5e36785b7fa274c71ca103d32fcf1c"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.532754 4900 scope.go:117] "RemoveContainer" containerID="db79db95d70a03b981ec7d9c93ec8d397d6d181c35dc0babe9571c80f20a28b3"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.571186 4900 scope.go:117] "RemoveContainer" containerID="6801577ba120fc2235b74ff52e7d832dc6e19cb77eed7e40406ab29bbc2e5f28"
Dec 02 14:09:57 crc kubenswrapper[4900]: I1202 14:09:57.621166 4900 scope.go:117] "RemoveContainer" containerID="a1ffea810a8add4b42ba35ba2e8c0050d0718defcd9ccaeab6fc931cff075942"
Dec 02 14:10:01 crc kubenswrapper[4900]: I1202 14:10:01.910330 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:10:01 crc kubenswrapper[4900]: E1202 14:10:01.911128 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:10:14 crc kubenswrapper[4900]: I1202 14:10:14.919216 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:10:14 crc kubenswrapper[4900]: E1202 14:10:14.920920 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:10:28 crc kubenswrapper[4900]: I1202 14:10:28.917893 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:10:28 crc kubenswrapper[4900]: E1202 14:10:28.918729 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:10:42 crc kubenswrapper[4900]: I1202 14:10:42.911155 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:10:42 crc kubenswrapper[4900]: E1202 14:10:42.912409 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:10:53 crc kubenswrapper[4900]: I1202 14:10:53.910693 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:10:53 crc kubenswrapper[4900]: E1202 14:10:53.911636 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:10:57 crc kubenswrapper[4900]: I1202 14:10:57.842991 4900 scope.go:117] "RemoveContainer" containerID="7e2a5e62fdfd6d58e261c3a59e316fbb35eb6ecbeecd1f0c5b424484dcb2b5d5"
Dec 02 14:10:57 crc kubenswrapper[4900]: I1202 14:10:57.895337 4900 scope.go:117] "RemoveContainer" containerID="cecd536d3b91c6e7b5527b0076daff8705116d48966a3a954f574f3ce05fbc95"
Dec 02 14:10:57 crc kubenswrapper[4900]: I1202 14:10:57.920848 4900 scope.go:117] "RemoveContainer" containerID="abf93b4af2ac9dd4692589722eae2b650829168ac6beae85279f3137d14413fd"
Dec 02 14:10:57 crc kubenswrapper[4900]: I1202 14:10:57.949579 4900 scope.go:117] "RemoveContainer" containerID="8622714b4af10f4679714dcef75b51b02fa214a81bae7f1181c185ad90cc0387"
Dec 02 14:10:57 crc kubenswrapper[4900]: I1202 14:10:57.972197 4900 scope.go:117] "RemoveContainer" containerID="0ebf754234916bfca27372570d7a573ae167129a9fe1251644b771d38e0378ca"
Dec 02 14:10:58 crc kubenswrapper[4900]: I1202 14:10:58.000761 4900 scope.go:117] "RemoveContainer" containerID="367b246fd004b35aecff3af708a39d75f5432904e0781ed90281b57adfd5a473"
Dec 02 14:10:58 crc kubenswrapper[4900]: I1202 14:10:58.052417 4900 scope.go:117] "RemoveContainer" containerID="5e8d8d1fb36f3a6f727fa7460d868495ea6d53445bd33ed40121a782230a4712"
Dec 02 14:10:58 crc kubenswrapper[4900]: I1202 14:10:58.080556 4900 scope.go:117] "RemoveContainer" containerID="8ef8394ade07a5f166cf2d4cc7e5c2c73d32bd419edf16d32c29571935cddf7d"
Dec 02 14:10:58 crc kubenswrapper[4900]: I1202 14:10:58.109846 4900 scope.go:117] "RemoveContainer" containerID="75b061719509895544c5101526474e3593e712a83ea8bf19f5d39b1e05838e7d"
Dec 02 14:10:58 crc kubenswrapper[4900]: I1202 14:10:58.126992 4900 scope.go:117] "RemoveContainer" containerID="2f05d6a11986157bd558c9b03bfdb6ec1225bad183c6930efe3b31debce11410"
Dec 02 14:10:58 crc kubenswrapper[4900]: I1202 14:10:58.149736 4900 scope.go:117] "RemoveContainer" containerID="add50ad9f64a2bdf1221d6b96af94f563bea1cd4fc5cd462eed8e1f992bf323e"
Dec 02 14:10:58 crc kubenswrapper[4900]: I1202 14:10:58.165219 4900 scope.go:117] "RemoveContainer" containerID="683ad46b79d8da86e3dc06d5fc634651aa5b590466fe3ab013890ca87d56975d"
Dec 02 14:10:58 crc kubenswrapper[4900]: I1202 14:10:58.187284 4900 scope.go:117] "RemoveContainer" containerID="8200837b67485781a599b717c166bbbea97823d4fef16a94bd63ede91ddd6205"
Dec 02 14:10:58 crc kubenswrapper[4900]: I1202 14:10:58.206554 4900 scope.go:117] "RemoveContainer" containerID="17feb893704561d9ac1181affdd466264eef60e1c8a36bef9d6bfe975eaed6b6"
Dec 02 14:10:58 crc kubenswrapper[4900]: I1202 14:10:58.226245 4900 scope.go:117] "RemoveContainer" containerID="9b9a1adc9bf6bc055b6124dbfb0cf0940f73cceff1fb98ba82f90ebb4fa7c9e3"
Dec 02 14:11:07 crc kubenswrapper[4900]: I1202 14:11:07.910882 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:11:07 crc kubenswrapper[4900]: E1202 14:11:07.912029 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:11:20 crc kubenswrapper[4900]: I1202 14:11:20.910240 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:11:20 crc kubenswrapper[4900]: E1202 14:11:20.911784 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:11:33 crc kubenswrapper[4900]: I1202 14:11:33.910742 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:11:33 crc kubenswrapper[4900]: E1202 14:11:33.911676 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:11:47 crc kubenswrapper[4900]: I1202 14:11:47.910351 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:11:47 crc kubenswrapper[4900]: E1202 14:11:47.911087 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:11:58 crc kubenswrapper[4900]: I1202 14:11:58.413731 4900 scope.go:117] "RemoveContainer" containerID="fc5c9018b82428ef5e5f1304fdb9ff60e961aa187b7f46a2cef4f807d0b2ce75"
Dec 02 14:11:58 crc kubenswrapper[4900]: I1202 14:11:58.438697 4900 scope.go:117] "RemoveContainer" containerID="7ca56bba94a87eec7b0bc4f9d045dadc3d8e23854ce2eca32ac737bea66f175a"
Dec 02 14:11:58 crc kubenswrapper[4900]: I1202 14:11:58.490214 4900 scope.go:117] "RemoveContainer" containerID="c520b85a547bf13b1e83e66f7b5a8281322844a8b7d7de8587e8cf1aec9a5943"
Dec 02 14:11:58 crc kubenswrapper[4900]: I1202 14:11:58.518929 4900 scope.go:117] "RemoveContainer" containerID="c62bcbaedf33f7593499ebbd156d4d7ef64b82ebebff13a7f3d1815cbb3551b2"
Dec 02 14:11:58 crc kubenswrapper[4900]: I1202 14:11:58.564900 4900 scope.go:117] "RemoveContainer" containerID="47cd53e70fc8f37391f555876fc14fd173fb43c514c6588428b1c4d99716e7be"
Dec 02 14:12:01 crc kubenswrapper[4900]: I1202 14:12:01.910513 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:12:01 crc kubenswrapper[4900]: E1202 14:12:01.911717 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:12:12 crc kubenswrapper[4900]: I1202 14:12:12.910211 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:12:12 crc kubenswrapper[4900]: E1202 14:12:12.911310 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:12:24 crc kubenswrapper[4900]: I1202 14:12:24.917925 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:12:24 crc kubenswrapper[4900]: E1202 14:12:24.921164 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:12:37 crc kubenswrapper[4900]: I1202 14:12:37.909809 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:12:37 crc kubenswrapper[4900]: E1202 14:12:37.910852 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:12:52 crc kubenswrapper[4900]: I1202 14:12:52.910523 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:12:52 crc kubenswrapper[4900]: E1202 14:12:52.911612 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:12:58 crc kubenswrapper[4900]: I1202 14:12:58.712126 4900 scope.go:117] "RemoveContainer" containerID="8a82c0a1aec9e86de7bbdef845e617555b59d19b58b5b8cd0eeb0e65e22aae77"
Dec 02 14:12:58 crc kubenswrapper[4900]: I1202 14:12:58.771566 4900 scope.go:117] "RemoveContainer" containerID="88407bc6bd8e2dfedf02e3b155acca5b6726043dcabd9f0d892d020af43e5c9f"
Dec 02 14:13:05 crc kubenswrapper[4900]: I1202 14:13:05.910157 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:13:05 crc kubenswrapper[4900]: E1202 14:13:05.911228 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 14:13:19 crc kubenswrapper[4900]: I1202 14:13:19.910715 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1"
Dec 02 14:13:20 crc kubenswrapper[4900]: I1202 14:13:20.851573 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"94687312b39445e3e3773c7f46fcd531902e8d2a0dd075059241741de4331599"}
Dec 02 14:13:58 crc kubenswrapper[4900]: I1202 14:13:58.845635 4900 scope.go:117] "RemoveContainer" containerID="1f5e54223736749411f69a0010169896be9ba0e0109828ad0b779c7a752eb6b2"
Dec 02 14:13:58 crc kubenswrapper[4900]: I1202 14:13:58.872098 4900 scope.go:117] "RemoveContainer" containerID="bfdc120ba1bdb391645cff021176334436c917a7367050aa573f3c24192b27ef"
Dec 02 14:13:58 crc kubenswrapper[4900]: I1202 14:13:58.925289 4900 scope.go:117] "RemoveContainer" containerID="29c7e3b447f8e90946a42aa75df7262af18b7cbe70118f4026c7a4203713e35f"
Dec 02 14:13:58 crc kubenswrapper[4900]: I1202 14:13:58.953133 4900 scope.go:117] "RemoveContainer" containerID="a9f4af34055e2f5c51f621081e1b4146f6a4806f7186d24561b906becbbe4c8e"
Dec 02 14:13:58 crc kubenswrapper[4900]: I1202 14:13:58.976932 4900 scope.go:117] "RemoveContainer" containerID="f42e2dec24b72332041c0590468f513f2e9b290b1de836a0a22d3dd19494f14f"
Dec 02 14:13:59 crc kubenswrapper[4900]: I1202 14:13:59.018001 4900 scope.go:117] "RemoveContainer" containerID="bb84c573856baa8e5ade2df9297e29c43751017e92787104acb31419fd9eb3d4"
Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.170625 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj"]
Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171607 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-updater"
Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.171630 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-updater"
Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171685 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-auditor"
Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.171700 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-auditor"
Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171717 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-reaper"
Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.171730 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-reaper"
Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171752 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-auditor"
Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.171764 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-auditor"
Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171786 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be36eb4-f760-4583-aaf9-104f96592096" containerName="registry-server"
Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.171799 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be36eb4-f760-4583-aaf9-104f96592096" containerName="registry-server"
Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171818 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-replicator"
Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.171830 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-replicator"
Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171850 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-server"
Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.171862 4900 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-server" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171879 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovs-vswitchd" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.171891 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovs-vswitchd" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171911 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-replicator" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.171923 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-replicator" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171940 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server-init" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.171952 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server-init" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171969 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.171982 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.171999 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-updater" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172011 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-updater" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.172032 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be36eb4-f760-4583-aaf9-104f96592096" containerName="extract-content" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172044 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be36eb4-f760-4583-aaf9-104f96592096" containerName="extract-content" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.172059 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-server" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172072 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-server" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.172089 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-replicator" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172102 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-replicator" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.172116 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-auditor" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172128 4900 
state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-auditor" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.172152 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="rsync" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172163 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="rsync" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.172179 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-expirer" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172191 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-expirer" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.172210 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="swift-recon-cron" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172222 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="swift-recon-cron" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.172238 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-server" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172251 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-server" Dec 02 14:15:00 crc kubenswrapper[4900]: E1202 14:15:00.172277 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be36eb4-f760-4583-aaf9-104f96592096" containerName="extract-utilities" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172290 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be36eb4-f760-4583-aaf9-104f96592096" containerName="extract-utilities" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172530 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-server" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172551 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-auditor" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172576 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-server" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172597 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-updater" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172611 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-updater" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172630 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-auditor" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172687 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-reaper" Dec 02 14:15:00 crc 
kubenswrapper[4900]: I1202 14:15:00.172706 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="swift-recon-cron" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172725 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-replicator" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172741 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-server" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172762 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovsdb-server" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172779 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-replicator" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172798 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="rsync" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172810 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="container-replicator" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172828 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="account-auditor" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172849 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be36eb4-f760-4583-aaf9-104f96592096" containerName="registry-server" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172868 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="305da939-8e7b-4fce-95f9-95d90218a1f0" containerName="object-expirer" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.172885 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79247d6-28ab-4234-a191-8799418aa3ea" containerName="ovs-vswitchd" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.173721 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.175734 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.178137 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.194291 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj"] Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.242904 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef81ba3b-7225-4356-bc06-bd331da5c7be-config-volume\") pod \"collect-profiles-29411415-pwscj\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.242996 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef81ba3b-7225-4356-bc06-bd331da5c7be-secret-volume\") pod \"collect-profiles-29411415-pwscj\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.243180 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqnq\" (UniqueName: \"kubernetes.io/projected/ef81ba3b-7225-4356-bc06-bd331da5c7be-kube-api-access-xkqnq\") pod \"collect-profiles-29411415-pwscj\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.344302 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef81ba3b-7225-4356-bc06-bd331da5c7be-secret-volume\") pod \"collect-profiles-29411415-pwscj\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.344427 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqnq\" (UniqueName: \"kubernetes.io/projected/ef81ba3b-7225-4356-bc06-bd331da5c7be-kube-api-access-xkqnq\") pod \"collect-profiles-29411415-pwscj\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.344568 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef81ba3b-7225-4356-bc06-bd331da5c7be-config-volume\") pod \"collect-profiles-29411415-pwscj\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.346575 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef81ba3b-7225-4356-bc06-bd331da5c7be-config-volume\") pod 
\"collect-profiles-29411415-pwscj\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.355121 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef81ba3b-7225-4356-bc06-bd331da5c7be-secret-volume\") pod \"collect-profiles-29411415-pwscj\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.374988 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkqnq\" (UniqueName: \"kubernetes.io/projected/ef81ba3b-7225-4356-bc06-bd331da5c7be-kube-api-access-xkqnq\") pod \"collect-profiles-29411415-pwscj\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:00 crc kubenswrapper[4900]: I1202 14:15:00.509344 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:01 crc kubenswrapper[4900]: I1202 14:15:01.014776 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj"] Dec 02 14:15:01 crc kubenswrapper[4900]: I1202 14:15:01.909307 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" event={"ID":"ef81ba3b-7225-4356-bc06-bd331da5c7be","Type":"ContainerStarted","Data":"0d4d680e9af109ce282353ea140d6fee77441bc23989f17fc55f9103cdb5c551"} Dec 02 14:15:01 crc kubenswrapper[4900]: I1202 14:15:01.909969 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" event={"ID":"ef81ba3b-7225-4356-bc06-bd331da5c7be","Type":"ContainerStarted","Data":"437e6a724744316e0a440d2c1e25630347d14ed7a9dd6aa88c3ee775822fc521"} Dec 02 14:15:02 crc kubenswrapper[4900]: I1202 14:15:02.923640 4900 generic.go:334] "Generic (PLEG): container finished" podID="ef81ba3b-7225-4356-bc06-bd331da5c7be" containerID="0d4d680e9af109ce282353ea140d6fee77441bc23989f17fc55f9103cdb5c551" exitCode=0 Dec 02 14:15:02 crc kubenswrapper[4900]: I1202 14:15:02.923752 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" event={"ID":"ef81ba3b-7225-4356-bc06-bd331da5c7be","Type":"ContainerDied","Data":"0d4d680e9af109ce282353ea140d6fee77441bc23989f17fc55f9103cdb5c551"} Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.279741 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.319501 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkqnq\" (UniqueName: \"kubernetes.io/projected/ef81ba3b-7225-4356-bc06-bd331da5c7be-kube-api-access-xkqnq\") pod \"ef81ba3b-7225-4356-bc06-bd331da5c7be\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.319572 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef81ba3b-7225-4356-bc06-bd331da5c7be-config-volume\") pod \"ef81ba3b-7225-4356-bc06-bd331da5c7be\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.319690 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef81ba3b-7225-4356-bc06-bd331da5c7be-secret-volume\") pod \"ef81ba3b-7225-4356-bc06-bd331da5c7be\" (UID: \"ef81ba3b-7225-4356-bc06-bd331da5c7be\") " Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.320359 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef81ba3b-7225-4356-bc06-bd331da5c7be-config-volume" (OuterVolumeSpecName: "config-volume") pod "ef81ba3b-7225-4356-bc06-bd331da5c7be" (UID: "ef81ba3b-7225-4356-bc06-bd331da5c7be"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.346109 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef81ba3b-7225-4356-bc06-bd331da5c7be-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ef81ba3b-7225-4356-bc06-bd331da5c7be" (UID: "ef81ba3b-7225-4356-bc06-bd331da5c7be"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.346153 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef81ba3b-7225-4356-bc06-bd331da5c7be-kube-api-access-xkqnq" (OuterVolumeSpecName: "kube-api-access-xkqnq") pod "ef81ba3b-7225-4356-bc06-bd331da5c7be" (UID: "ef81ba3b-7225-4356-bc06-bd331da5c7be"). InnerVolumeSpecName "kube-api-access-xkqnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.421672 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef81ba3b-7225-4356-bc06-bd331da5c7be-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.421706 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkqnq\" (UniqueName: \"kubernetes.io/projected/ef81ba3b-7225-4356-bc06-bd331da5c7be-kube-api-access-xkqnq\") on node \"crc\" DevicePath \"\"" Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.421723 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef81ba3b-7225-4356-bc06-bd331da5c7be-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.946126 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" event={"ID":"ef81ba3b-7225-4356-bc06-bd331da5c7be","Type":"ContainerDied","Data":"437e6a724744316e0a440d2c1e25630347d14ed7a9dd6aa88c3ee775822fc521"} Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.946180 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="437e6a724744316e0a440d2c1e25630347d14ed7a9dd6aa88c3ee775822fc521" Dec 02 14:15:04 crc kubenswrapper[4900]: I1202 14:15:04.946231 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj" Dec 02 14:15:45 crc kubenswrapper[4900]: I1202 14:15:45.117090 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:15:45 crc kubenswrapper[4900]: I1202 14:15:45.117856 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:16:15 crc kubenswrapper[4900]: I1202 14:16:15.116889 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:16:15 crc kubenswrapper[4900]: I1202 14:16:15.117612 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.652327 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f4z7b"] Dec 02 14:16:24 crc kubenswrapper[4900]: E1202 14:16:24.653223 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef81ba3b-7225-4356-bc06-bd331da5c7be" containerName="collect-profiles" Dec 02 14:16:24 crc kubenswrapper[4900]: 
I1202 14:16:24.653239 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef81ba3b-7225-4356-bc06-bd331da5c7be" containerName="collect-profiles" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.653412 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef81ba3b-7225-4356-bc06-bd331da5c7be" containerName="collect-profiles" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.654658 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.665615 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4z7b"] Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.846205 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twbm7\" (UniqueName: \"kubernetes.io/projected/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-kube-api-access-twbm7\") pod \"redhat-operators-f4z7b\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.846502 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-catalog-content\") pod \"redhat-operators-f4z7b\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.846585 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-utilities\") pod \"redhat-operators-f4z7b\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.947791 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-catalog-content\") pod \"redhat-operators-f4z7b\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.948019 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-utilities\") pod \"redhat-operators-f4z7b\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.948226 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twbm7\" (UniqueName: \"kubernetes.io/projected/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-kube-api-access-twbm7\") pod \"redhat-operators-f4z7b\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.948337 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-catalog-content\") pod \"redhat-operators-f4z7b\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 
14:16:24.948987 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-utilities\") pod \"redhat-operators-f4z7b\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.979170 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twbm7\" (UniqueName: \"kubernetes.io/projected/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-kube-api-access-twbm7\") pod \"redhat-operators-f4z7b\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:24 crc kubenswrapper[4900]: I1202 14:16:24.988730 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:25 crc kubenswrapper[4900]: I1202 14:16:25.418424 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4z7b"] Dec 02 14:16:25 crc kubenswrapper[4900]: I1202 14:16:25.737212 4900 generic.go:334] "Generic (PLEG): container finished" podID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" containerID="e44c03c59779d7e816f8785247fde5950df43bdf75ebc5af4270f4b165b92929" exitCode=0 Dec 02 14:16:25 crc kubenswrapper[4900]: I1202 14:16:25.737284 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4z7b" event={"ID":"a23e7f43-5825-4327-85ee-e8bdc9b43cb2","Type":"ContainerDied","Data":"e44c03c59779d7e816f8785247fde5950df43bdf75ebc5af4270f4b165b92929"} Dec 02 14:16:25 crc kubenswrapper[4900]: I1202 14:16:25.737574 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4z7b" event={"ID":"a23e7f43-5825-4327-85ee-e8bdc9b43cb2","Type":"ContainerStarted","Data":"d64a1532a311762947a2fddd846a5935f8b94500708ad3496f54bff155684de5"} Dec 02 14:16:25 crc kubenswrapper[4900]: I1202 14:16:25.738993 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:16:27 crc kubenswrapper[4900]: I1202 14:16:27.755896 4900 generic.go:334] "Generic (PLEG): container finished" podID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" containerID="03603980e09814f4d5996b29c9f57902d602fe7f16be59d0756ba71aaff40900" exitCode=0 Dec 02 14:16:27 crc kubenswrapper[4900]: I1202 14:16:27.755949 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4z7b" event={"ID":"a23e7f43-5825-4327-85ee-e8bdc9b43cb2","Type":"ContainerDied","Data":"03603980e09814f4d5996b29c9f57902d602fe7f16be59d0756ba71aaff40900"} Dec 02 14:16:28 crc kubenswrapper[4900]: I1202 14:16:28.766285 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4z7b" event={"ID":"a23e7f43-5825-4327-85ee-e8bdc9b43cb2","Type":"ContainerStarted","Data":"77a9bc74c634efda55f5995725f7a97b350b1dba138c24bf39b1fc5eca567d45"} Dec 02 14:16:28 crc kubenswrapper[4900]: I1202 14:16:28.786487 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f4z7b" podStartSLOduration=2.256751154 podStartE2EDuration="4.78647017s" podCreationTimestamp="2025-12-02 14:16:24 +0000 UTC" firstStartedPulling="2025-12-02 14:16:25.73869145 +0000 UTC m=+2031.154505301" lastFinishedPulling="2025-12-02 14:16:28.268410456 +0000 UTC m=+2033.684224317" observedRunningTime="2025-12-02 14:16:28.784540146 +0000 
UTC m=+2034.200353997" watchObservedRunningTime="2025-12-02 14:16:28.78647017 +0000 UTC m=+2034.202284021" Dec 02 14:16:34 crc kubenswrapper[4900]: I1202 14:16:34.988946 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:34 crc kubenswrapper[4900]: I1202 14:16:34.989254 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:35 crc kubenswrapper[4900]: I1202 14:16:35.061491 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:35 crc kubenswrapper[4900]: I1202 14:16:35.138274 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:35 crc kubenswrapper[4900]: I1202 14:16:35.308803 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4z7b"] Dec 02 14:16:37 crc kubenswrapper[4900]: I1202 14:16:37.033496 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f4z7b" podUID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" containerName="registry-server" containerID="cri-o://77a9bc74c634efda55f5995725f7a97b350b1dba138c24bf39b1fc5eca567d45" gracePeriod=2 Dec 02 14:16:38 crc kubenswrapper[4900]: I1202 14:16:38.048421 4900 generic.go:334] "Generic (PLEG): container finished" podID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" containerID="77a9bc74c634efda55f5995725f7a97b350b1dba138c24bf39b1fc5eca567d45" exitCode=0 Dec 02 14:16:38 crc kubenswrapper[4900]: I1202 14:16:38.048502 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4z7b" event={"ID":"a23e7f43-5825-4327-85ee-e8bdc9b43cb2","Type":"ContainerDied","Data":"77a9bc74c634efda55f5995725f7a97b350b1dba138c24bf39b1fc5eca567d45"} Dec 02 14:16:39 crc kubenswrapper[4900]: I1202 14:16:39.560163 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:39 crc kubenswrapper[4900]: I1202 14:16:39.656819 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-utilities\") pod \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " Dec 02 14:16:39 crc kubenswrapper[4900]: I1202 14:16:39.657036 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-catalog-content\") pod \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " Dec 02 14:16:39 crc kubenswrapper[4900]: I1202 14:16:39.657125 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twbm7\" (UniqueName: \"kubernetes.io/projected/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-kube-api-access-twbm7\") pod \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\" (UID: \"a23e7f43-5825-4327-85ee-e8bdc9b43cb2\") " Dec 02 14:16:39 crc kubenswrapper[4900]: I1202 14:16:39.657676 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-utilities" (OuterVolumeSpecName: "utilities") pod "a23e7f43-5825-4327-85ee-e8bdc9b43cb2" (UID: "a23e7f43-5825-4327-85ee-e8bdc9b43cb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:16:39 crc kubenswrapper[4900]: I1202 14:16:39.664865 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-kube-api-access-twbm7" (OuterVolumeSpecName: "kube-api-access-twbm7") pod "a23e7f43-5825-4327-85ee-e8bdc9b43cb2" (UID: "a23e7f43-5825-4327-85ee-e8bdc9b43cb2"). InnerVolumeSpecName "kube-api-access-twbm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:16:39 crc kubenswrapper[4900]: I1202 14:16:39.759515 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twbm7\" (UniqueName: \"kubernetes.io/projected/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-kube-api-access-twbm7\") on node \"crc\" DevicePath \"\"" Dec 02 14:16:39 crc kubenswrapper[4900]: I1202 14:16:39.759620 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:16:39 crc kubenswrapper[4900]: I1202 14:16:39.769126 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a23e7f43-5825-4327-85ee-e8bdc9b43cb2" (UID: "a23e7f43-5825-4327-85ee-e8bdc9b43cb2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:16:39 crc kubenswrapper[4900]: I1202 14:16:39.861270 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23e7f43-5825-4327-85ee-e8bdc9b43cb2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:16:40 crc kubenswrapper[4900]: I1202 14:16:40.068141 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4z7b" event={"ID":"a23e7f43-5825-4327-85ee-e8bdc9b43cb2","Type":"ContainerDied","Data":"d64a1532a311762947a2fddd846a5935f8b94500708ad3496f54bff155684de5"} Dec 02 14:16:40 crc kubenswrapper[4900]: I1202 14:16:40.068229 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4z7b" Dec 02 14:16:40 crc kubenswrapper[4900]: I1202 14:16:40.068565 4900 scope.go:117] "RemoveContainer" containerID="77a9bc74c634efda55f5995725f7a97b350b1dba138c24bf39b1fc5eca567d45" Dec 02 14:16:40 crc kubenswrapper[4900]: I1202 14:16:40.090594 4900 scope.go:117] "RemoveContainer" containerID="03603980e09814f4d5996b29c9f57902d602fe7f16be59d0756ba71aaff40900" Dec 02 14:16:40 crc kubenswrapper[4900]: I1202 14:16:40.109952 4900 scope.go:117] "RemoveContainer" containerID="e44c03c59779d7e816f8785247fde5950df43bdf75ebc5af4270f4b165b92929" Dec 02 14:16:40 crc kubenswrapper[4900]: I1202 14:16:40.129555 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4z7b"] Dec 02 14:16:40 crc kubenswrapper[4900]: I1202 14:16:40.137915 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f4z7b"] Dec 02 14:16:40 crc kubenswrapper[4900]: I1202 14:16:40.923621 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" path="/var/lib/kubelet/pods/a23e7f43-5825-4327-85ee-e8bdc9b43cb2/volumes" Dec 02 14:16:45 crc kubenswrapper[4900]: I1202 14:16:45.116682 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:16:45 crc kubenswrapper[4900]: I1202 14:16:45.117031 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:16:45 crc kubenswrapper[4900]: I1202 14:16:45.117083 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 14:16:45 crc kubenswrapper[4900]: I1202 14:16:45.117856 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94687312b39445e3e3773c7f46fcd531902e8d2a0dd075059241741de4331599"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:16:45 crc kubenswrapper[4900]: I1202 14:16:45.117927 4900 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://94687312b39445e3e3773c7f46fcd531902e8d2a0dd075059241741de4331599" gracePeriod=600 Dec 02 14:16:46 crc kubenswrapper[4900]: I1202 14:16:46.119787 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="94687312b39445e3e3773c7f46fcd531902e8d2a0dd075059241741de4331599" exitCode=0 Dec 02 14:16:46 crc kubenswrapper[4900]: I1202 14:16:46.119855 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"94687312b39445e3e3773c7f46fcd531902e8d2a0dd075059241741de4331599"} Dec 02 14:16:46 crc kubenswrapper[4900]: I1202 14:16:46.120290 4900 scope.go:117] "RemoveContainer" containerID="e31b879fbcecdb2d2749d0c46c63b476290ec087509c3fbde68f0eb28f4afcb1" Dec 02 14:16:47 crc kubenswrapper[4900]: I1202 14:16:47.128727 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f"} Dec 02 14:17:27 crc kubenswrapper[4900]: I1202 14:17:27.807970 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsjz"] Dec 02 14:17:27 crc kubenswrapper[4900]: E1202 14:17:27.808894 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" containerName="extract-content" Dec 02 14:17:27 crc kubenswrapper[4900]: I1202 14:17:27.808911 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" containerName="extract-content" Dec 02 14:17:27 crc kubenswrapper[4900]: E1202 14:17:27.808931 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" containerName="extract-utilities" Dec 02 14:17:27 crc kubenswrapper[4900]: I1202 14:17:27.808939 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" containerName="extract-utilities" Dec 02 14:17:27 crc kubenswrapper[4900]: E1202 14:17:27.808950 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" containerName="registry-server" Dec 02 14:17:27 crc kubenswrapper[4900]: I1202 14:17:27.808960 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" containerName="registry-server" Dec 02 14:17:27 crc kubenswrapper[4900]: I1202 14:17:27.809120 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23e7f43-5825-4327-85ee-e8bdc9b43cb2" containerName="registry-server" Dec 02 14:17:27 crc kubenswrapper[4900]: I1202 14:17:27.810323 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:27 crc kubenswrapper[4900]: I1202 14:17:27.826969 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsjz"] Dec 02 14:17:27 crc kubenswrapper[4900]: I1202 14:17:27.984091 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-catalog-content\") pod \"redhat-marketplace-bzsjz\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:27 crc kubenswrapper[4900]: I1202 14:17:27.984163 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp4t5\" (UniqueName: \"kubernetes.io/projected/b5657d07-85bc-4197-8ba9-a3ee1931a501-kube-api-access-dp4t5\") pod \"redhat-marketplace-bzsjz\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:27 crc kubenswrapper[4900]: I1202 14:17:27.984184 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-utilities\") pod \"redhat-marketplace-bzsjz\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:28 crc kubenswrapper[4900]: I1202 14:17:28.086020 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp4t5\" (UniqueName: \"kubernetes.io/projected/b5657d07-85bc-4197-8ba9-a3ee1931a501-kube-api-access-dp4t5\") pod \"redhat-marketplace-bzsjz\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:28 crc kubenswrapper[4900]: I1202 14:17:28.086082 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-utilities\") pod \"redhat-marketplace-bzsjz\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:28 crc kubenswrapper[4900]: I1202 14:17:28.086186 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-catalog-content\") pod \"redhat-marketplace-bzsjz\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:28 crc kubenswrapper[4900]: I1202 14:17:28.086708 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-catalog-content\") pod \"redhat-marketplace-bzsjz\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:28 crc kubenswrapper[4900]: I1202 14:17:28.086753 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-utilities\") pod \"redhat-marketplace-bzsjz\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:28 crc kubenswrapper[4900]: I1202 14:17:28.106256 4900 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dp4t5\" (UniqueName: \"kubernetes.io/projected/b5657d07-85bc-4197-8ba9-a3ee1931a501-kube-api-access-dp4t5\") pod \"redhat-marketplace-bzsjz\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:28 crc kubenswrapper[4900]: I1202 14:17:28.130360 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:28 crc kubenswrapper[4900]: I1202 14:17:28.573931 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsjz"] Dec 02 14:17:29 crc kubenswrapper[4900]: I1202 14:17:29.522120 4900 generic.go:334] "Generic (PLEG): container finished" podID="b5657d07-85bc-4197-8ba9-a3ee1931a501" containerID="48d160391cfabe9b19af04f0477a0733c4c55e68c5ac15fce597fc88f8542436" exitCode=0 Dec 02 14:17:29 crc kubenswrapper[4900]: I1202 14:17:29.522205 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsjz" event={"ID":"b5657d07-85bc-4197-8ba9-a3ee1931a501","Type":"ContainerDied","Data":"48d160391cfabe9b19af04f0477a0733c4c55e68c5ac15fce597fc88f8542436"} Dec 02 14:17:29 crc kubenswrapper[4900]: I1202 14:17:29.522522 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsjz" event={"ID":"b5657d07-85bc-4197-8ba9-a3ee1931a501","Type":"ContainerStarted","Data":"6958c6ded976475cf278463a34ec8eb0940a1ddaa5e762498ec2700e64af81ca"} Dec 02 14:17:31 crc kubenswrapper[4900]: I1202 14:17:31.544715 4900 generic.go:334] "Generic (PLEG): container finished" podID="b5657d07-85bc-4197-8ba9-a3ee1931a501" containerID="85d3f861dac31cfe55fec97d31f4bf3e0eff6052bca69ffb3c0e70a6bf94c718" exitCode=0 Dec 02 14:17:31 crc kubenswrapper[4900]: I1202 14:17:31.544838 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsjz" event={"ID":"b5657d07-85bc-4197-8ba9-a3ee1931a501","Type":"ContainerDied","Data":"85d3f861dac31cfe55fec97d31f4bf3e0eff6052bca69ffb3c0e70a6bf94c718"} Dec 02 14:17:34 crc kubenswrapper[4900]: I1202 14:17:34.592536 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsjz" event={"ID":"b5657d07-85bc-4197-8ba9-a3ee1931a501","Type":"ContainerStarted","Data":"6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3"} Dec 02 14:17:34 crc kubenswrapper[4900]: I1202 14:17:34.623530 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bzsjz" podStartSLOduration=3.526189339 podStartE2EDuration="7.62350604s" podCreationTimestamp="2025-12-02 14:17:27 +0000 UTC" firstStartedPulling="2025-12-02 14:17:29.524556454 +0000 UTC m=+2094.940370345" lastFinishedPulling="2025-12-02 14:17:33.621873185 +0000 UTC m=+2099.037687046" observedRunningTime="2025-12-02 14:17:34.622279976 +0000 UTC m=+2100.038093867" watchObservedRunningTime="2025-12-02 14:17:34.62350604 +0000 UTC m=+2100.039319901" Dec 02 14:17:38 crc kubenswrapper[4900]: I1202 14:17:38.131314 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:38 crc kubenswrapper[4900]: I1202 14:17:38.132481 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:38 crc kubenswrapper[4900]: I1202 14:17:38.176506 4900 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:48 crc kubenswrapper[4900]: I1202 14:17:48.188741 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:48 crc kubenswrapper[4900]: I1202 14:17:48.245691 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsjz"] Dec 02 14:17:48 crc kubenswrapper[4900]: I1202 14:17:48.713514 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bzsjz" podUID="b5657d07-85bc-4197-8ba9-a3ee1931a501" containerName="registry-server" containerID="cri-o://6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3" gracePeriod=2 Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.175375 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.342204 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-catalog-content\") pod \"b5657d07-85bc-4197-8ba9-a3ee1931a501\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.342504 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-utilities\") pod \"b5657d07-85bc-4197-8ba9-a3ee1931a501\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.342541 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp4t5\" (UniqueName: \"kubernetes.io/projected/b5657d07-85bc-4197-8ba9-a3ee1931a501-kube-api-access-dp4t5\") pod \"b5657d07-85bc-4197-8ba9-a3ee1931a501\" (UID: \"b5657d07-85bc-4197-8ba9-a3ee1931a501\") " Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.344420 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-utilities" (OuterVolumeSpecName: "utilities") pod "b5657d07-85bc-4197-8ba9-a3ee1931a501" (UID: "b5657d07-85bc-4197-8ba9-a3ee1931a501"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.348892 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5657d07-85bc-4197-8ba9-a3ee1931a501-kube-api-access-dp4t5" (OuterVolumeSpecName: "kube-api-access-dp4t5") pod "b5657d07-85bc-4197-8ba9-a3ee1931a501" (UID: "b5657d07-85bc-4197-8ba9-a3ee1931a501"). InnerVolumeSpecName "kube-api-access-dp4t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.362714 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5657d07-85bc-4197-8ba9-a3ee1931a501" (UID: "b5657d07-85bc-4197-8ba9-a3ee1931a501"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.444310 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.444344 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp4t5\" (UniqueName: \"kubernetes.io/projected/b5657d07-85bc-4197-8ba9-a3ee1931a501-kube-api-access-dp4t5\") on node \"crc\" DevicePath \"\"" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.444355 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5657d07-85bc-4197-8ba9-a3ee1931a501-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.723632 4900 generic.go:334] "Generic (PLEG): container finished" podID="b5657d07-85bc-4197-8ba9-a3ee1931a501" containerID="6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3" exitCode=0 Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.723675 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsjz" event={"ID":"b5657d07-85bc-4197-8ba9-a3ee1931a501","Type":"ContainerDied","Data":"6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3"} Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.723724 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsjz" event={"ID":"b5657d07-85bc-4197-8ba9-a3ee1931a501","Type":"ContainerDied","Data":"6958c6ded976475cf278463a34ec8eb0940a1ddaa5e762498ec2700e64af81ca"} Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.723753 4900 scope.go:117] "RemoveContainer" containerID="6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.723781 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsjz" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.748583 4900 scope.go:117] "RemoveContainer" containerID="85d3f861dac31cfe55fec97d31f4bf3e0eff6052bca69ffb3c0e70a6bf94c718" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.756192 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsjz"] Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.760977 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsjz"] Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.790075 4900 scope.go:117] "RemoveContainer" containerID="48d160391cfabe9b19af04f0477a0733c4c55e68c5ac15fce597fc88f8542436" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.804949 4900 scope.go:117] "RemoveContainer" containerID="6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3" Dec 02 14:17:49 crc kubenswrapper[4900]: E1202 14:17:49.805507 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3\": container with ID starting with 6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3 not found: ID does not exist" containerID="6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.805579 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3"} err="failed to get container status \"6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3\": rpc error: code = NotFound desc = could not find container \"6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3\": container with ID starting with 6240e7dc53807332319628c293f7e92ccc5c3cdd8fa681f6318e002fdfd360b3 not found: ID does not exist" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.805629 4900 scope.go:117] "RemoveContainer" containerID="85d3f861dac31cfe55fec97d31f4bf3e0eff6052bca69ffb3c0e70a6bf94c718" Dec 02 14:17:49 crc kubenswrapper[4900]: E1202 14:17:49.806171 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d3f861dac31cfe55fec97d31f4bf3e0eff6052bca69ffb3c0e70a6bf94c718\": container with ID starting with 85d3f861dac31cfe55fec97d31f4bf3e0eff6052bca69ffb3c0e70a6bf94c718 not found: ID does not exist" containerID="85d3f861dac31cfe55fec97d31f4bf3e0eff6052bca69ffb3c0e70a6bf94c718" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.806226 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d3f861dac31cfe55fec97d31f4bf3e0eff6052bca69ffb3c0e70a6bf94c718"} err="failed to get container status \"85d3f861dac31cfe55fec97d31f4bf3e0eff6052bca69ffb3c0e70a6bf94c718\": rpc error: code = NotFound desc = could not find container \"85d3f861dac31cfe55fec97d31f4bf3e0eff6052bca69ffb3c0e70a6bf94c718\": container with ID starting with 85d3f861dac31cfe55fec97d31f4bf3e0eff6052bca69ffb3c0e70a6bf94c718 not found: ID does not exist" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.806246 4900 scope.go:117] "RemoveContainer" containerID="48d160391cfabe9b19af04f0477a0733c4c55e68c5ac15fce597fc88f8542436" Dec 02 14:17:49 crc kubenswrapper[4900]: E1202 14:17:49.806592 4900 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"48d160391cfabe9b19af04f0477a0733c4c55e68c5ac15fce597fc88f8542436\": container with ID starting with 48d160391cfabe9b19af04f0477a0733c4c55e68c5ac15fce597fc88f8542436 not found: ID does not exist" containerID="48d160391cfabe9b19af04f0477a0733c4c55e68c5ac15fce597fc88f8542436" Dec 02 14:17:49 crc kubenswrapper[4900]: I1202 14:17:49.806669 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d160391cfabe9b19af04f0477a0733c4c55e68c5ac15fce597fc88f8542436"} err="failed to get container status \"48d160391cfabe9b19af04f0477a0733c4c55e68c5ac15fce597fc88f8542436\": rpc error: code = NotFound desc = could not find container \"48d160391cfabe9b19af04f0477a0733c4c55e68c5ac15fce597fc88f8542436\": container with ID starting with 48d160391cfabe9b19af04f0477a0733c4c55e68c5ac15fce597fc88f8542436 not found: ID does not exist" Dec 02 14:17:50 crc kubenswrapper[4900]: I1202 14:17:50.927901 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5657d07-85bc-4197-8ba9-a3ee1931a501" path="/var/lib/kubelet/pods/b5657d07-85bc-4197-8ba9-a3ee1931a501/volumes" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.603103 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2rwgf"] Dec 02 14:18:38 crc kubenswrapper[4900]: E1202 14:18:38.604246 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5657d07-85bc-4197-8ba9-a3ee1931a501" containerName="extract-content" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.604271 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5657d07-85bc-4197-8ba9-a3ee1931a501" containerName="extract-content" Dec 02 14:18:38 crc kubenswrapper[4900]: E1202 14:18:38.604300 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5657d07-85bc-4197-8ba9-a3ee1931a501" containerName="registry-server" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.604314 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5657d07-85bc-4197-8ba9-a3ee1931a501" containerName="registry-server" Dec 02 14:18:38 crc kubenswrapper[4900]: E1202 14:18:38.604353 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5657d07-85bc-4197-8ba9-a3ee1931a501" containerName="extract-utilities" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.604366 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5657d07-85bc-4197-8ba9-a3ee1931a501" containerName="extract-utilities" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.604606 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5657d07-85bc-4197-8ba9-a3ee1931a501" containerName="registry-server" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.607069 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.611055 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2rwgf"] Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.717950 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-catalog-content\") pod \"community-operators-2rwgf\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.718041 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-utilities\") pod \"community-operators-2rwgf\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.718090 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mp7f\" (UniqueName: \"kubernetes.io/projected/e9352a73-8857-4eb3-a61f-505fd0439f21-kube-api-access-2mp7f\") pod \"community-operators-2rwgf\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.819216 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-utilities\") pod \"community-operators-2rwgf\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.819301 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mp7f\" (UniqueName: \"kubernetes.io/projected/e9352a73-8857-4eb3-a61f-505fd0439f21-kube-api-access-2mp7f\") pod \"community-operators-2rwgf\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.819379 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-catalog-content\") pod \"community-operators-2rwgf\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.820172 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-catalog-content\") pod \"community-operators-2rwgf\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.820247 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-utilities\") pod \"community-operators-2rwgf\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.861625 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2mp7f\" (UniqueName: \"kubernetes.io/projected/e9352a73-8857-4eb3-a61f-505fd0439f21-kube-api-access-2mp7f\") pod \"community-operators-2rwgf\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:38 crc kubenswrapper[4900]: I1202 14:18:38.976366 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:39 crc kubenswrapper[4900]: I1202 14:18:39.420719 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2rwgf"] Dec 02 14:18:40 crc kubenswrapper[4900]: I1202 14:18:40.159348 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rwgf" event={"ID":"e9352a73-8857-4eb3-a61f-505fd0439f21","Type":"ContainerStarted","Data":"c258422fbf4ddfb42fd485a5dadd3546f21bf89bda05eef2407d131f057acf77"} Dec 02 14:18:43 crc kubenswrapper[4900]: I1202 14:18:43.189690 4900 generic.go:334] "Generic (PLEG): container finished" podID="e9352a73-8857-4eb3-a61f-505fd0439f21" containerID="0a3aed69b4bd2d704e8dc76da202d0ec16417b63ee121efd38a9707242840590" exitCode=0 Dec 02 14:18:43 crc kubenswrapper[4900]: I1202 14:18:43.190028 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rwgf" event={"ID":"e9352a73-8857-4eb3-a61f-505fd0439f21","Type":"ContainerDied","Data":"0a3aed69b4bd2d704e8dc76da202d0ec16417b63ee121efd38a9707242840590"} Dec 02 14:18:51 crc kubenswrapper[4900]: I1202 14:18:51.277429 4900 generic.go:334] "Generic (PLEG): container finished" podID="e9352a73-8857-4eb3-a61f-505fd0439f21" containerID="ad50cc5b8eac9e0ae1705a9ca601be73d88a262d9f4c83d62055cf2241328e4d" exitCode=0 Dec 02 14:18:51 crc kubenswrapper[4900]: I1202 14:18:51.277548 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rwgf" event={"ID":"e9352a73-8857-4eb3-a61f-505fd0439f21","Type":"ContainerDied","Data":"ad50cc5b8eac9e0ae1705a9ca601be73d88a262d9f4c83d62055cf2241328e4d"} Dec 02 14:18:53 crc kubenswrapper[4900]: I1202 14:18:53.298656 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rwgf" event={"ID":"e9352a73-8857-4eb3-a61f-505fd0439f21","Type":"ContainerStarted","Data":"273b39dbe75d35714f7792b26eeae4275746b8cc84c0a8a115ba0ee7513bd510"} Dec 02 14:18:53 crc kubenswrapper[4900]: I1202 14:18:53.323402 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2rwgf" podStartSLOduration=5.581954982 podStartE2EDuration="15.323384357s" podCreationTimestamp="2025-12-02 14:18:38 +0000 UTC" firstStartedPulling="2025-12-02 14:18:43.191853627 +0000 UTC m=+2168.607667508" lastFinishedPulling="2025-12-02 14:18:52.933283012 +0000 UTC m=+2178.349096883" observedRunningTime="2025-12-02 14:18:53.317144643 +0000 UTC m=+2178.732958504" watchObservedRunningTime="2025-12-02 14:18:53.323384357 +0000 UTC m=+2178.739198208" Dec 02 14:18:58 crc kubenswrapper[4900]: I1202 14:18:58.976544 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:58 crc kubenswrapper[4900]: I1202 14:18:58.977026 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:59 crc kubenswrapper[4900]: I1202 14:18:59.033906 
4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:59 crc kubenswrapper[4900]: I1202 14:18:59.420157 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:18:59 crc kubenswrapper[4900]: I1202 14:18:59.494199 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2rwgf"] Dec 02 14:19:01 crc kubenswrapper[4900]: I1202 14:19:01.373765 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2rwgf" podUID="e9352a73-8857-4eb3-a61f-505fd0439f21" containerName="registry-server" containerID="cri-o://273b39dbe75d35714f7792b26eeae4275746b8cc84c0a8a115ba0ee7513bd510" gracePeriod=2 Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.409322 4900 generic.go:334] "Generic (PLEG): container finished" podID="e9352a73-8857-4eb3-a61f-505fd0439f21" containerID="273b39dbe75d35714f7792b26eeae4275746b8cc84c0a8a115ba0ee7513bd510" exitCode=0 Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.409445 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rwgf" event={"ID":"e9352a73-8857-4eb3-a61f-505fd0439f21","Type":"ContainerDied","Data":"273b39dbe75d35714f7792b26eeae4275746b8cc84c0a8a115ba0ee7513bd510"} Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.578350 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.687213 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-catalog-content\") pod \"e9352a73-8857-4eb3-a61f-505fd0439f21\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.687377 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-utilities\") pod \"e9352a73-8857-4eb3-a61f-505fd0439f21\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.687464 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mp7f\" (UniqueName: \"kubernetes.io/projected/e9352a73-8857-4eb3-a61f-505fd0439f21-kube-api-access-2mp7f\") pod \"e9352a73-8857-4eb3-a61f-505fd0439f21\" (UID: \"e9352a73-8857-4eb3-a61f-505fd0439f21\") " Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.688842 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-utilities" (OuterVolumeSpecName: "utilities") pod "e9352a73-8857-4eb3-a61f-505fd0439f21" (UID: "e9352a73-8857-4eb3-a61f-505fd0439f21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.695198 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9352a73-8857-4eb3-a61f-505fd0439f21-kube-api-access-2mp7f" (OuterVolumeSpecName: "kube-api-access-2mp7f") pod "e9352a73-8857-4eb3-a61f-505fd0439f21" (UID: "e9352a73-8857-4eb3-a61f-505fd0439f21"). InnerVolumeSpecName "kube-api-access-2mp7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.762066 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9352a73-8857-4eb3-a61f-505fd0439f21" (UID: "e9352a73-8857-4eb3-a61f-505fd0439f21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.789517 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.789591 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mp7f\" (UniqueName: \"kubernetes.io/projected/e9352a73-8857-4eb3-a61f-505fd0439f21-kube-api-access-2mp7f\") on node \"crc\" DevicePath \"\"" Dec 02 14:19:05 crc kubenswrapper[4900]: I1202 14:19:05.789602 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9352a73-8857-4eb3-a61f-505fd0439f21-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:19:06 crc kubenswrapper[4900]: I1202 14:19:06.421017 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rwgf" event={"ID":"e9352a73-8857-4eb3-a61f-505fd0439f21","Type":"ContainerDied","Data":"c258422fbf4ddfb42fd485a5dadd3546f21bf89bda05eef2407d131f057acf77"} Dec 02 14:19:06 crc kubenswrapper[4900]: I1202 14:19:06.421079 4900 scope.go:117] "RemoveContainer" containerID="273b39dbe75d35714f7792b26eeae4275746b8cc84c0a8a115ba0ee7513bd510" Dec 02 14:19:06 crc kubenswrapper[4900]: I1202 14:19:06.421123 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2rwgf" Dec 02 14:19:06 crc kubenswrapper[4900]: I1202 14:19:06.461178 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2rwgf"] Dec 02 14:19:06 crc kubenswrapper[4900]: I1202 14:19:06.462053 4900 scope.go:117] "RemoveContainer" containerID="ad50cc5b8eac9e0ae1705a9ca601be73d88a262d9f4c83d62055cf2241328e4d" Dec 02 14:19:06 crc kubenswrapper[4900]: I1202 14:19:06.471671 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2rwgf"] Dec 02 14:19:06 crc kubenswrapper[4900]: I1202 14:19:06.501591 4900 scope.go:117] "RemoveContainer" containerID="0a3aed69b4bd2d704e8dc76da202d0ec16417b63ee121efd38a9707242840590" Dec 02 14:19:06 crc kubenswrapper[4900]: I1202 14:19:06.920903 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9352a73-8857-4eb3-a61f-505fd0439f21" path="/var/lib/kubelet/pods/e9352a73-8857-4eb3-a61f-505fd0439f21/volumes" Dec 02 14:19:15 crc kubenswrapper[4900]: I1202 14:19:15.116784 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:19:15 crc kubenswrapper[4900]: I1202 14:19:15.117426 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:19:45 crc kubenswrapper[4900]: I1202 14:19:45.116488 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:19:45 crc kubenswrapper[4900]: I1202 14:19:45.117292 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:20:15 crc kubenswrapper[4900]: I1202 14:20:15.117266 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:20:15 crc kubenswrapper[4900]: I1202 14:20:15.118207 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:20:15 crc kubenswrapper[4900]: I1202 14:20:15.118270 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 14:20:15 crc kubenswrapper[4900]: I1202 14:20:15.119299 4900 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:20:15 crc kubenswrapper[4900]: I1202 14:20:15.119376 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" gracePeriod=600 Dec 02 14:20:15 crc kubenswrapper[4900]: E1202 14:20:15.268701 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:20:16 crc kubenswrapper[4900]: I1202 14:20:16.064299 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" exitCode=0 Dec 02 14:20:16 crc kubenswrapper[4900]: I1202 14:20:16.064382 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f"} Dec 02 14:20:16 crc kubenswrapper[4900]: I1202 14:20:16.064713 4900 scope.go:117] "RemoveContainer" containerID="94687312b39445e3e3773c7f46fcd531902e8d2a0dd075059241741de4331599" Dec 02 14:20:16 crc kubenswrapper[4900]: I1202 14:20:16.065891 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:20:16 crc kubenswrapper[4900]: E1202 14:20:16.066235 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:20:26 crc kubenswrapper[4900]: I1202 14:20:26.910446 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:20:26 crc kubenswrapper[4900]: E1202 14:20:26.911163 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:20:38 crc kubenswrapper[4900]: I1202 14:20:38.910815 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 
14:20:38 crc kubenswrapper[4900]: E1202 14:20:38.911949 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:20:53 crc kubenswrapper[4900]: I1202 14:20:53.910950 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:20:53 crc kubenswrapper[4900]: E1202 14:20:53.912002 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:21:04 crc kubenswrapper[4900]: I1202 14:21:04.918080 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:21:04 crc kubenswrapper[4900]: E1202 14:21:04.919000 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:21:15 crc kubenswrapper[4900]: I1202 14:21:15.910399 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:21:15 crc kubenswrapper[4900]: E1202 14:21:15.911384 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:21:23 crc kubenswrapper[4900]: I1202 14:21:23.947814 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mrzqb"] Dec 02 14:21:23 crc kubenswrapper[4900]: E1202 14:21:23.948711 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9352a73-8857-4eb3-a61f-505fd0439f21" containerName="extract-content" Dec 02 14:21:23 crc kubenswrapper[4900]: I1202 14:21:23.948728 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9352a73-8857-4eb3-a61f-505fd0439f21" containerName="extract-content" Dec 02 14:21:23 crc kubenswrapper[4900]: E1202 14:21:23.948759 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9352a73-8857-4eb3-a61f-505fd0439f21" containerName="registry-server" Dec 02 14:21:23 crc kubenswrapper[4900]: I1202 14:21:23.948766 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9352a73-8857-4eb3-a61f-505fd0439f21" containerName="registry-server" Dec 02 14:21:23 crc kubenswrapper[4900]: E1202 14:21:23.948783 4900 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e9352a73-8857-4eb3-a61f-505fd0439f21" containerName="extract-utilities" Dec 02 14:21:23 crc kubenswrapper[4900]: I1202 14:21:23.948790 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9352a73-8857-4eb3-a61f-505fd0439f21" containerName="extract-utilities" Dec 02 14:21:23 crc kubenswrapper[4900]: I1202 14:21:23.948923 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9352a73-8857-4eb3-a61f-505fd0439f21" containerName="registry-server" Dec 02 14:21:23 crc kubenswrapper[4900]: I1202 14:21:23.950198 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:23 crc kubenswrapper[4900]: I1202 14:21:23.973468 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrzqb"] Dec 02 14:21:24 crc kubenswrapper[4900]: I1202 14:21:24.118559 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqlb\" (UniqueName: \"kubernetes.io/projected/0a2bff94-78c1-492c-83d0-7b85fd07cc09-kube-api-access-ppqlb\") pod \"certified-operators-mrzqb\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:24 crc kubenswrapper[4900]: I1202 14:21:24.118781 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-utilities\") pod \"certified-operators-mrzqb\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:24 crc kubenswrapper[4900]: I1202 14:21:24.118849 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-catalog-content\") pod \"certified-operators-mrzqb\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:24 crc kubenswrapper[4900]: I1202 14:21:24.219454 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-utilities\") pod \"certified-operators-mrzqb\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:24 crc kubenswrapper[4900]: I1202 14:21:24.219529 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-catalog-content\") pod \"certified-operators-mrzqb\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:24 crc kubenswrapper[4900]: I1202 14:21:24.219595 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppqlb\" (UniqueName: \"kubernetes.io/projected/0a2bff94-78c1-492c-83d0-7b85fd07cc09-kube-api-access-ppqlb\") pod \"certified-operators-mrzqb\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:24 crc kubenswrapper[4900]: I1202 14:21:24.220200 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-utilities\") pod \"certified-operators-mrzqb\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:24 crc kubenswrapper[4900]: I1202 14:21:24.220398 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-catalog-content\") pod \"certified-operators-mrzqb\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:24 crc kubenswrapper[4900]: I1202 14:21:24.244387 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppqlb\" (UniqueName: \"kubernetes.io/projected/0a2bff94-78c1-492c-83d0-7b85fd07cc09-kube-api-access-ppqlb\") pod \"certified-operators-mrzqb\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:24 crc kubenswrapper[4900]: I1202 14:21:24.269440 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:24 crc kubenswrapper[4900]: I1202 14:21:24.702623 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrzqb"] Dec 02 14:21:25 crc kubenswrapper[4900]: I1202 14:21:25.630778 4900 generic.go:334] "Generic (PLEG): container finished" podID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" containerID="0c178d08f412a0a65de87db41a46f78c3caedab475b5ff5d9e9e618ac2db5cc0" exitCode=0 Dec 02 14:21:25 crc kubenswrapper[4900]: I1202 14:21:25.630818 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrzqb" event={"ID":"0a2bff94-78c1-492c-83d0-7b85fd07cc09","Type":"ContainerDied","Data":"0c178d08f412a0a65de87db41a46f78c3caedab475b5ff5d9e9e618ac2db5cc0"} Dec 02 14:21:25 crc kubenswrapper[4900]: I1202 14:21:25.630844 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrzqb" event={"ID":"0a2bff94-78c1-492c-83d0-7b85fd07cc09","Type":"ContainerStarted","Data":"d9a1d07ef1741ac0b96bb8ae2948d25e718b439f3b69f672e3590a7071ec53a3"} Dec 02 14:21:26 crc kubenswrapper[4900]: I1202 14:21:26.909537 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:21:26 crc kubenswrapper[4900]: E1202 14:21:26.910056 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:21:27 crc kubenswrapper[4900]: I1202 14:21:27.644825 4900 generic.go:334] "Generic (PLEG): container finished" podID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" containerID="433a860a452249549f696e2fb9170c90a83d77bea755067fa1560bcf060c95b1" exitCode=0 Dec 02 14:21:27 crc kubenswrapper[4900]: I1202 14:21:27.644895 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrzqb" event={"ID":"0a2bff94-78c1-492c-83d0-7b85fd07cc09","Type":"ContainerDied","Data":"433a860a452249549f696e2fb9170c90a83d77bea755067fa1560bcf060c95b1"} Dec 02 14:21:27 crc 
kubenswrapper[4900]: I1202 14:21:27.647201 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:21:28 crc kubenswrapper[4900]: I1202 14:21:28.656041 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrzqb" event={"ID":"0a2bff94-78c1-492c-83d0-7b85fd07cc09","Type":"ContainerStarted","Data":"a3bf3d5bf2184bad93dbb402a2b4090ce83b8c5d7d41356f19a9f44155b3c531"} Dec 02 14:21:28 crc kubenswrapper[4900]: I1202 14:21:28.678031 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mrzqb" podStartSLOduration=3.087505011 podStartE2EDuration="5.678010483s" podCreationTimestamp="2025-12-02 14:21:23 +0000 UTC" firstStartedPulling="2025-12-02 14:21:25.633891987 +0000 UTC m=+2331.049705838" lastFinishedPulling="2025-12-02 14:21:28.224397459 +0000 UTC m=+2333.640211310" observedRunningTime="2025-12-02 14:21:28.67681421 +0000 UTC m=+2334.092628071" watchObservedRunningTime="2025-12-02 14:21:28.678010483 +0000 UTC m=+2334.093824334" Dec 02 14:21:34 crc kubenswrapper[4900]: I1202 14:21:34.270028 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:34 crc kubenswrapper[4900]: I1202 14:21:34.270502 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:34 crc kubenswrapper[4900]: I1202 14:21:34.317686 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:34 crc kubenswrapper[4900]: I1202 14:21:34.743267 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:37 crc kubenswrapper[4900]: I1202 14:21:37.543850 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mrzqb"] Dec 02 14:21:37 crc kubenswrapper[4900]: I1202 14:21:37.544783 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mrzqb" podUID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" containerName="registry-server" containerID="cri-o://a3bf3d5bf2184bad93dbb402a2b4090ce83b8c5d7d41356f19a9f44155b3c531" gracePeriod=2 Dec 02 14:21:38 crc kubenswrapper[4900]: I1202 14:21:38.910611 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:21:38 crc kubenswrapper[4900]: E1202 14:21:38.911102 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:21:41 crc kubenswrapper[4900]: I1202 14:21:41.761594 4900 generic.go:334] "Generic (PLEG): container finished" podID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" containerID="a3bf3d5bf2184bad93dbb402a2b4090ce83b8c5d7d41356f19a9f44155b3c531" exitCode=0 Dec 02 14:21:41 crc kubenswrapper[4900]: I1202 14:21:41.761680 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrzqb" 
event={"ID":"0a2bff94-78c1-492c-83d0-7b85fd07cc09","Type":"ContainerDied","Data":"a3bf3d5bf2184bad93dbb402a2b4090ce83b8c5d7d41356f19a9f44155b3c531"} Dec 02 14:21:41 crc kubenswrapper[4900]: I1202 14:21:41.896401 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:41 crc kubenswrapper[4900]: I1202 14:21:41.921050 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-utilities\") pod \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " Dec 02 14:21:41 crc kubenswrapper[4900]: I1202 14:21:41.921135 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-catalog-content\") pod \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " Dec 02 14:21:41 crc kubenswrapper[4900]: I1202 14:21:41.921254 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppqlb\" (UniqueName: \"kubernetes.io/projected/0a2bff94-78c1-492c-83d0-7b85fd07cc09-kube-api-access-ppqlb\") pod \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\" (UID: \"0a2bff94-78c1-492c-83d0-7b85fd07cc09\") " Dec 02 14:21:41 crc kubenswrapper[4900]: I1202 14:21:41.922530 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-utilities" (OuterVolumeSpecName: "utilities") pod "0a2bff94-78c1-492c-83d0-7b85fd07cc09" (UID: "0a2bff94-78c1-492c-83d0-7b85fd07cc09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:21:41 crc kubenswrapper[4900]: I1202 14:21:41.934894 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2bff94-78c1-492c-83d0-7b85fd07cc09-kube-api-access-ppqlb" (OuterVolumeSpecName: "kube-api-access-ppqlb") pod "0a2bff94-78c1-492c-83d0-7b85fd07cc09" (UID: "0a2bff94-78c1-492c-83d0-7b85fd07cc09"). InnerVolumeSpecName "kube-api-access-ppqlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:21:41 crc kubenswrapper[4900]: I1202 14:21:41.970516 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a2bff94-78c1-492c-83d0-7b85fd07cc09" (UID: "0a2bff94-78c1-492c-83d0-7b85fd07cc09"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:21:42 crc kubenswrapper[4900]: I1202 14:21:42.022393 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppqlb\" (UniqueName: \"kubernetes.io/projected/0a2bff94-78c1-492c-83d0-7b85fd07cc09-kube-api-access-ppqlb\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:42 crc kubenswrapper[4900]: I1202 14:21:42.022429 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:42 crc kubenswrapper[4900]: I1202 14:21:42.022439 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2bff94-78c1-492c-83d0-7b85fd07cc09-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:42 crc kubenswrapper[4900]: I1202 14:21:42.772097 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrzqb" event={"ID":"0a2bff94-78c1-492c-83d0-7b85fd07cc09","Type":"ContainerDied","Data":"d9a1d07ef1741ac0b96bb8ae2948d25e718b439f3b69f672e3590a7071ec53a3"} Dec 02 14:21:42 crc kubenswrapper[4900]: I1202 14:21:42.772183 4900 scope.go:117] "RemoveContainer" containerID="a3bf3d5bf2184bad93dbb402a2b4090ce83b8c5d7d41356f19a9f44155b3c531" Dec 02 14:21:42 crc kubenswrapper[4900]: I1202 14:21:42.772184 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrzqb" Dec 02 14:21:42 crc kubenswrapper[4900]: I1202 14:21:42.796765 4900 scope.go:117] "RemoveContainer" containerID="433a860a452249549f696e2fb9170c90a83d77bea755067fa1560bcf060c95b1" Dec 02 14:21:42 crc kubenswrapper[4900]: I1202 14:21:42.815702 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mrzqb"] Dec 02 14:21:42 crc kubenswrapper[4900]: I1202 14:21:42.822469 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mrzqb"] Dec 02 14:21:42 crc kubenswrapper[4900]: I1202 14:21:42.833088 4900 scope.go:117] "RemoveContainer" containerID="0c178d08f412a0a65de87db41a46f78c3caedab475b5ff5d9e9e618ac2db5cc0" Dec 02 14:21:42 crc kubenswrapper[4900]: I1202 14:21:42.918293 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" path="/var/lib/kubelet/pods/0a2bff94-78c1-492c-83d0-7b85fd07cc09/volumes" Dec 02 14:21:51 crc kubenswrapper[4900]: I1202 14:21:51.911060 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:21:51 crc kubenswrapper[4900]: E1202 14:21:51.913056 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:22:06 crc kubenswrapper[4900]: I1202 14:22:06.910403 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:22:06 crc kubenswrapper[4900]: E1202 14:22:06.911127 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:22:19 crc kubenswrapper[4900]: I1202 14:22:19.910107 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:22:19 crc kubenswrapper[4900]: E1202 14:22:19.912219 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:22:32 crc kubenswrapper[4900]: I1202 14:22:32.958966 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:22:32 crc kubenswrapper[4900]: E1202 14:22:32.959765 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:22:47 crc kubenswrapper[4900]: I1202 14:22:47.910147 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:22:48 crc kubenswrapper[4900]: E1202 14:22:47.911167 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:22:59 crc kubenswrapper[4900]: I1202 14:22:59.909997 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:22:59 crc kubenswrapper[4900]: E1202 14:22:59.910941 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:23:10 crc kubenswrapper[4900]: I1202 14:23:10.910754 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:23:10 crc kubenswrapper[4900]: E1202 14:23:10.912310 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:23:21 crc kubenswrapper[4900]: I1202 14:23:21.911087 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:23:21 crc kubenswrapper[4900]: E1202 14:23:21.911783 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:23:33 crc kubenswrapper[4900]: I1202 14:23:33.910054 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:23:33 crc kubenswrapper[4900]: E1202 14:23:33.911023 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:23:45 crc kubenswrapper[4900]: I1202 14:23:45.910672 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:23:45 crc kubenswrapper[4900]: E1202 14:23:45.911483 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:23:58 crc kubenswrapper[4900]: I1202 14:23:58.910396 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:23:58 crc kubenswrapper[4900]: E1202 14:23:58.911055 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:24:09 crc kubenswrapper[4900]: I1202 14:24:09.909931 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:24:09 crc kubenswrapper[4900]: E1202 14:24:09.911256 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" 
podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:24:20 crc kubenswrapper[4900]: I1202 14:24:20.911033 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:24:20 crc kubenswrapper[4900]: E1202 14:24:20.914893 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:24:34 crc kubenswrapper[4900]: I1202 14:24:34.915206 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:24:34 crc kubenswrapper[4900]: E1202 14:24:34.916028 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:24:48 crc kubenswrapper[4900]: I1202 14:24:48.910169 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:24:48 crc kubenswrapper[4900]: E1202 14:24:48.910883 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:24:59 crc kubenswrapper[4900]: I1202 14:24:59.910390 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:24:59 crc kubenswrapper[4900]: E1202 14:24:59.911424 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:25:11 crc kubenswrapper[4900]: I1202 14:25:11.910045 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:25:11 crc kubenswrapper[4900]: E1202 14:25:11.911894 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:25:25 crc kubenswrapper[4900]: I1202 14:25:25.910533 4900 scope.go:117] "RemoveContainer" 
containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f" Dec 02 14:25:26 crc kubenswrapper[4900]: I1202 14:25:26.872512 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"25b11fd4cf71fcd3c2d6b68c2b4317e68b7b5ccf1902e8341fedbdc2b2ddb0f7"} Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.687130 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8kwvm"] Dec 02 14:27:31 crc kubenswrapper[4900]: E1202 14:27:31.689100 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" containerName="registry-server" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.689188 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" containerName="registry-server" Dec 02 14:27:31 crc kubenswrapper[4900]: E1202 14:27:31.689267 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" containerName="extract-content" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.689326 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" containerName="extract-content" Dec 02 14:27:31 crc kubenswrapper[4900]: E1202 14:27:31.689389 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" containerName="extract-utilities" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.689447 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" containerName="extract-utilities" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.689687 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2bff94-78c1-492c-83d0-7b85fd07cc09" containerName="registry-server" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.690840 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.715676 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kwvm"] Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.770729 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-catalog-content\") pod \"redhat-operators-8kwvm\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.770795 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7drjd\" (UniqueName: \"kubernetes.io/projected/c4ed4285-0d42-40bc-a881-b3bca86bcbab-kube-api-access-7drjd\") pod \"redhat-operators-8kwvm\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.771155 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-utilities\") pod \"redhat-operators-8kwvm\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.872783 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-catalog-content\") pod \"redhat-operators-8kwvm\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.872840 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7drjd\" (UniqueName: \"kubernetes.io/projected/c4ed4285-0d42-40bc-a881-b3bca86bcbab-kube-api-access-7drjd\") pod \"redhat-operators-8kwvm\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.872930 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-utilities\") pod \"redhat-operators-8kwvm\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.873319 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-catalog-content\") pod \"redhat-operators-8kwvm\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.873504 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-utilities\") pod \"redhat-operators-8kwvm\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:31 crc kubenswrapper[4900]: I1202 14:27:31.891765 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7drjd\" (UniqueName: \"kubernetes.io/projected/c4ed4285-0d42-40bc-a881-b3bca86bcbab-kube-api-access-7drjd\") pod \"redhat-operators-8kwvm\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:32 crc kubenswrapper[4900]: I1202 14:27:32.020847 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:32 crc kubenswrapper[4900]: I1202 14:27:32.457890 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kwvm"] Dec 02 14:27:33 crc kubenswrapper[4900]: I1202 14:27:33.409470 4900 generic.go:334] "Generic (PLEG): container finished" podID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" containerID="9af4f032f62a3449d6affdc5d9539f6cf8bd009c8a149b7284d3f286c0905c9f" exitCode=0 Dec 02 14:27:33 crc kubenswrapper[4900]: I1202 14:27:33.409549 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kwvm" event={"ID":"c4ed4285-0d42-40bc-a881-b3bca86bcbab","Type":"ContainerDied","Data":"9af4f032f62a3449d6affdc5d9539f6cf8bd009c8a149b7284d3f286c0905c9f"} Dec 02 14:27:33 crc kubenswrapper[4900]: I1202 14:27:33.409949 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kwvm" event={"ID":"c4ed4285-0d42-40bc-a881-b3bca86bcbab","Type":"ContainerStarted","Data":"ee762e9acac97e83b8c5f6394d688575f561f01df9fc5a3574be15bdaa5ab1b0"} Dec 02 14:27:33 crc kubenswrapper[4900]: I1202 14:27:33.411364 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:27:35 crc kubenswrapper[4900]: I1202 14:27:35.433548 4900 generic.go:334] "Generic (PLEG): container finished" podID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" containerID="df4d790e985b7f647b85b175a13d44b8c86fcc3c3d1c7931409df9fb74af3eab" exitCode=0 Dec 02 14:27:35 crc kubenswrapper[4900]: I1202 14:27:35.433742 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kwvm" event={"ID":"c4ed4285-0d42-40bc-a881-b3bca86bcbab","Type":"ContainerDied","Data":"df4d790e985b7f647b85b175a13d44b8c86fcc3c3d1c7931409df9fb74af3eab"} Dec 02 14:27:36 crc kubenswrapper[4900]: I1202 14:27:36.443154 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kwvm" event={"ID":"c4ed4285-0d42-40bc-a881-b3bca86bcbab","Type":"ContainerStarted","Data":"80546b443b65ea5cc6e52965d25813aacfea365393dea2041a82ef2f2b05886f"} Dec 02 14:27:36 crc kubenswrapper[4900]: I1202 14:27:36.468738 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8kwvm" podStartSLOduration=3.00070836 podStartE2EDuration="5.468721186s" podCreationTimestamp="2025-12-02 14:27:31 +0000 UTC" firstStartedPulling="2025-12-02 14:27:33.411063875 +0000 UTC m=+2698.826877736" lastFinishedPulling="2025-12-02 14:27:35.879076711 +0000 UTC m=+2701.294890562" observedRunningTime="2025-12-02 14:27:36.461574975 +0000 UTC m=+2701.877388826" watchObservedRunningTime="2025-12-02 14:27:36.468721186 +0000 UTC m=+2701.884535037" Dec 02 14:27:42 crc kubenswrapper[4900]: I1202 14:27:42.021083 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:42 crc kubenswrapper[4900]: I1202 14:27:42.021454 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:42 crc kubenswrapper[4900]: I1202 14:27:42.087098 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:42 crc kubenswrapper[4900]: I1202 14:27:42.557903 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:42 crc kubenswrapper[4900]: I1202 14:27:42.620271 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8kwvm"] Dec 02 14:27:44 crc kubenswrapper[4900]: I1202 14:27:44.515938 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8kwvm" podUID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" containerName="registry-server" containerID="cri-o://80546b443b65ea5cc6e52965d25813aacfea365393dea2041a82ef2f2b05886f" gracePeriod=2 Dec 02 14:27:45 crc kubenswrapper[4900]: I1202 14:27:45.117251 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:27:45 crc kubenswrapper[4900]: I1202 14:27:45.117323 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:27:47 crc kubenswrapper[4900]: I1202 14:27:47.546697 4900 generic.go:334] "Generic (PLEG): container finished" podID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" containerID="80546b443b65ea5cc6e52965d25813aacfea365393dea2041a82ef2f2b05886f" exitCode=0 Dec 02 14:27:47 crc kubenswrapper[4900]: I1202 14:27:47.546953 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kwvm" event={"ID":"c4ed4285-0d42-40bc-a881-b3bca86bcbab","Type":"ContainerDied","Data":"80546b443b65ea5cc6e52965d25813aacfea365393dea2041a82ef2f2b05886f"} Dec 02 14:27:47 crc kubenswrapper[4900]: I1202 14:27:47.692485 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:47 crc kubenswrapper[4900]: I1202 14:27:47.811366 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-utilities\") pod \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " Dec 02 14:27:47 crc kubenswrapper[4900]: I1202 14:27:47.811508 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7drjd\" (UniqueName: \"kubernetes.io/projected/c4ed4285-0d42-40bc-a881-b3bca86bcbab-kube-api-access-7drjd\") pod \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " Dec 02 14:27:47 crc kubenswrapper[4900]: I1202 14:27:47.811535 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-catalog-content\") pod \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\" (UID: \"c4ed4285-0d42-40bc-a881-b3bca86bcbab\") " Dec 02 14:27:47 crc kubenswrapper[4900]: I1202 14:27:47.812412 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-utilities" (OuterVolumeSpecName: "utilities") pod "c4ed4285-0d42-40bc-a881-b3bca86bcbab" (UID: "c4ed4285-0d42-40bc-a881-b3bca86bcbab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:27:47 crc kubenswrapper[4900]: I1202 14:27:47.819845 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ed4285-0d42-40bc-a881-b3bca86bcbab-kube-api-access-7drjd" (OuterVolumeSpecName: "kube-api-access-7drjd") pod "c4ed4285-0d42-40bc-a881-b3bca86bcbab" (UID: "c4ed4285-0d42-40bc-a881-b3bca86bcbab"). InnerVolumeSpecName "kube-api-access-7drjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:27:47 crc kubenswrapper[4900]: I1202 14:27:47.912987 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:27:47 crc kubenswrapper[4900]: I1202 14:27:47.913030 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7drjd\" (UniqueName: \"kubernetes.io/projected/c4ed4285-0d42-40bc-a881-b3bca86bcbab-kube-api-access-7drjd\") on node \"crc\" DevicePath \"\"" Dec 02 14:27:47 crc kubenswrapper[4900]: I1202 14:27:47.967072 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4ed4285-0d42-40bc-a881-b3bca86bcbab" (UID: "c4ed4285-0d42-40bc-a881-b3bca86bcbab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:27:48 crc kubenswrapper[4900]: I1202 14:27:48.015005 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed4285-0d42-40bc-a881-b3bca86bcbab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:27:48 crc kubenswrapper[4900]: I1202 14:27:48.561924 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kwvm" event={"ID":"c4ed4285-0d42-40bc-a881-b3bca86bcbab","Type":"ContainerDied","Data":"ee762e9acac97e83b8c5f6394d688575f561f01df9fc5a3574be15bdaa5ab1b0"} Dec 02 14:27:48 crc kubenswrapper[4900]: I1202 14:27:48.562006 4900 scope.go:117] "RemoveContainer" containerID="80546b443b65ea5cc6e52965d25813aacfea365393dea2041a82ef2f2b05886f" Dec 02 14:27:48 crc kubenswrapper[4900]: I1202 14:27:48.562036 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8kwvm" Dec 02 14:27:48 crc kubenswrapper[4900]: I1202 14:27:48.587663 4900 scope.go:117] "RemoveContainer" containerID="df4d790e985b7f647b85b175a13d44b8c86fcc3c3d1c7931409df9fb74af3eab" Dec 02 14:27:48 crc kubenswrapper[4900]: I1202 14:27:48.607133 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8kwvm"] Dec 02 14:27:48 crc kubenswrapper[4900]: I1202 14:27:48.617905 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8kwvm"] Dec 02 14:27:48 crc kubenswrapper[4900]: I1202 14:27:48.629687 4900 scope.go:117] "RemoveContainer" containerID="9af4f032f62a3449d6affdc5d9539f6cf8bd009c8a149b7284d3f286c0905c9f" Dec 02 14:27:48 crc kubenswrapper[4900]: I1202 14:27:48.930260 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" path="/var/lib/kubelet/pods/c4ed4285-0d42-40bc-a881-b3bca86bcbab/volumes" Dec 02 14:28:15 crc kubenswrapper[4900]: I1202 14:28:15.117383 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:28:15 crc kubenswrapper[4900]: I1202 14:28:15.118050 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.532171 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m2b7b"] Dec 02 14:28:17 crc kubenswrapper[4900]: E1202 14:28:17.532751 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" containerName="extract-utilities" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.532827 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" containerName="extract-utilities" Dec 02 14:28:17 crc kubenswrapper[4900]: E1202 14:28:17.532844 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" containerName="extract-content" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.532853 
4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" containerName="extract-content" Dec 02 14:28:17 crc kubenswrapper[4900]: E1202 14:28:17.532887 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" containerName="registry-server" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.532896 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" containerName="registry-server" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.533073 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ed4285-0d42-40bc-a881-b3bca86bcbab" containerName="registry-server" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.535261 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.551963 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2b7b"] Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.556814 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chs7b\" (UniqueName: \"kubernetes.io/projected/6ce9425a-af69-4b36-8b1d-8d45cf642719-kube-api-access-chs7b\") pod \"redhat-marketplace-m2b7b\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.557017 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-utilities\") pod \"redhat-marketplace-m2b7b\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.557102 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-catalog-content\") pod \"redhat-marketplace-m2b7b\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.658039 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-catalog-content\") pod \"redhat-marketplace-m2b7b\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.658503 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chs7b\" (UniqueName: \"kubernetes.io/projected/6ce9425a-af69-4b36-8b1d-8d45cf642719-kube-api-access-chs7b\") pod \"redhat-marketplace-m2b7b\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.658550 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-utilities\") pod \"redhat-marketplace-m2b7b\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:17 crc 
kubenswrapper[4900]: I1202 14:28:17.658787 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-catalog-content\") pod \"redhat-marketplace-m2b7b\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.659013 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-utilities\") pod \"redhat-marketplace-m2b7b\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.677065 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chs7b\" (UniqueName: \"kubernetes.io/projected/6ce9425a-af69-4b36-8b1d-8d45cf642719-kube-api-access-chs7b\") pod \"redhat-marketplace-m2b7b\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:17 crc kubenswrapper[4900]: I1202 14:28:17.866837 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:18 crc kubenswrapper[4900]: I1202 14:28:18.322249 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2b7b"] Dec 02 14:28:18 crc kubenswrapper[4900]: I1202 14:28:18.787225 4900 generic.go:334] "Generic (PLEG): container finished" podID="6ce9425a-af69-4b36-8b1d-8d45cf642719" containerID="b56fd22fa44ef6abc4c84e2889ca79c676522ea41c8b80eac4a5a540f2b8e57c" exitCode=0 Dec 02 14:28:18 crc kubenswrapper[4900]: I1202 14:28:18.787276 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2b7b" event={"ID":"6ce9425a-af69-4b36-8b1d-8d45cf642719","Type":"ContainerDied","Data":"b56fd22fa44ef6abc4c84e2889ca79c676522ea41c8b80eac4a5a540f2b8e57c"} Dec 02 14:28:18 crc kubenswrapper[4900]: I1202 14:28:18.787462 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2b7b" event={"ID":"6ce9425a-af69-4b36-8b1d-8d45cf642719","Type":"ContainerStarted","Data":"5d3663d3c88c2d582e85ed829e0c320def2e9b7bc2ad806ec5948d54f433682b"} Dec 02 14:28:20 crc kubenswrapper[4900]: E1202 14:28:20.277451 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ce9425a_af69_4b36_8b1d_8d45cf642719.slice/crio-1e559bd69af950cd646e89b0d282562bb8308b35637033bff7283dd969822f8d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ce9425a_af69_4b36_8b1d_8d45cf642719.slice/crio-conmon-1e559bd69af950cd646e89b0d282562bb8308b35637033bff7283dd969822f8d.scope\": RecentStats: unable to find data in memory cache]" Dec 02 14:28:20 crc kubenswrapper[4900]: I1202 14:28:20.805868 4900 generic.go:334] "Generic (PLEG): container finished" podID="6ce9425a-af69-4b36-8b1d-8d45cf642719" containerID="1e559bd69af950cd646e89b0d282562bb8308b35637033bff7283dd969822f8d" exitCode=0 Dec 02 14:28:20 crc kubenswrapper[4900]: I1202 14:28:20.806301 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2b7b" 
event={"ID":"6ce9425a-af69-4b36-8b1d-8d45cf642719","Type":"ContainerDied","Data":"1e559bd69af950cd646e89b0d282562bb8308b35637033bff7283dd969822f8d"} Dec 02 14:28:21 crc kubenswrapper[4900]: I1202 14:28:21.814471 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2b7b" event={"ID":"6ce9425a-af69-4b36-8b1d-8d45cf642719","Type":"ContainerStarted","Data":"b43858bdebad483ca0392c2edaa71c0183f92d41c5e1ae612004d57014aaccbc"} Dec 02 14:28:21 crc kubenswrapper[4900]: I1202 14:28:21.838897 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m2b7b" podStartSLOduration=2.280408004 podStartE2EDuration="4.838875291s" podCreationTimestamp="2025-12-02 14:28:17 +0000 UTC" firstStartedPulling="2025-12-02 14:28:18.788636426 +0000 UTC m=+2744.204450277" lastFinishedPulling="2025-12-02 14:28:21.347103713 +0000 UTC m=+2746.762917564" observedRunningTime="2025-12-02 14:28:21.834990271 +0000 UTC m=+2747.250804122" watchObservedRunningTime="2025-12-02 14:28:21.838875291 +0000 UTC m=+2747.254689132" Dec 02 14:28:27 crc kubenswrapper[4900]: I1202 14:28:27.867552 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:27 crc kubenswrapper[4900]: I1202 14:28:27.867937 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:27 crc kubenswrapper[4900]: I1202 14:28:27.912143 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:28 crc kubenswrapper[4900]: I1202 14:28:28.899789 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:28 crc kubenswrapper[4900]: I1202 14:28:28.951556 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2b7b"] Dec 02 14:28:30 crc kubenswrapper[4900]: I1202 14:28:30.875974 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m2b7b" podUID="6ce9425a-af69-4b36-8b1d-8d45cf642719" containerName="registry-server" containerID="cri-o://b43858bdebad483ca0392c2edaa71c0183f92d41c5e1ae612004d57014aaccbc" gracePeriod=2 Dec 02 14:28:31 crc kubenswrapper[4900]: I1202 14:28:31.889899 4900 generic.go:334] "Generic (PLEG): container finished" podID="6ce9425a-af69-4b36-8b1d-8d45cf642719" containerID="b43858bdebad483ca0392c2edaa71c0183f92d41c5e1ae612004d57014aaccbc" exitCode=0 Dec 02 14:28:31 crc kubenswrapper[4900]: I1202 14:28:31.890011 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2b7b" event={"ID":"6ce9425a-af69-4b36-8b1d-8d45cf642719","Type":"ContainerDied","Data":"b43858bdebad483ca0392c2edaa71c0183f92d41c5e1ae612004d57014aaccbc"} Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.443166 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.519492 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chs7b\" (UniqueName: \"kubernetes.io/projected/6ce9425a-af69-4b36-8b1d-8d45cf642719-kube-api-access-chs7b\") pod \"6ce9425a-af69-4b36-8b1d-8d45cf642719\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.519567 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-utilities\") pod \"6ce9425a-af69-4b36-8b1d-8d45cf642719\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.519636 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-catalog-content\") pod \"6ce9425a-af69-4b36-8b1d-8d45cf642719\" (UID: \"6ce9425a-af69-4b36-8b1d-8d45cf642719\") " Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.520875 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-utilities" (OuterVolumeSpecName: "utilities") pod "6ce9425a-af69-4b36-8b1d-8d45cf642719" (UID: "6ce9425a-af69-4b36-8b1d-8d45cf642719"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.525765 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce9425a-af69-4b36-8b1d-8d45cf642719-kube-api-access-chs7b" (OuterVolumeSpecName: "kube-api-access-chs7b") pod "6ce9425a-af69-4b36-8b1d-8d45cf642719" (UID: "6ce9425a-af69-4b36-8b1d-8d45cf642719"). InnerVolumeSpecName "kube-api-access-chs7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.539225 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ce9425a-af69-4b36-8b1d-8d45cf642719" (UID: "6ce9425a-af69-4b36-8b1d-8d45cf642719"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.621029 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.621268 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chs7b\" (UniqueName: \"kubernetes.io/projected/6ce9425a-af69-4b36-8b1d-8d45cf642719-kube-api-access-chs7b\") on node \"crc\" DevicePath \"\"" Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.621278 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce9425a-af69-4b36-8b1d-8d45cf642719-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.902353 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2b7b" event={"ID":"6ce9425a-af69-4b36-8b1d-8d45cf642719","Type":"ContainerDied","Data":"5d3663d3c88c2d582e85ed829e0c320def2e9b7bc2ad806ec5948d54f433682b"} Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.902401 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2b7b" Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.902416 4900 scope.go:117] "RemoveContainer" containerID="b43858bdebad483ca0392c2edaa71c0183f92d41c5e1ae612004d57014aaccbc" Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.923788 4900 scope.go:117] "RemoveContainer" containerID="1e559bd69af950cd646e89b0d282562bb8308b35637033bff7283dd969822f8d" Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.932516 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2b7b"] Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.939270 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2b7b"] Dec 02 14:28:32 crc kubenswrapper[4900]: I1202 14:28:32.965621 4900 scope.go:117] "RemoveContainer" containerID="b56fd22fa44ef6abc4c84e2889ca79c676522ea41c8b80eac4a5a540f2b8e57c" Dec 02 14:28:34 crc kubenswrapper[4900]: I1202 14:28:34.922825 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce9425a-af69-4b36-8b1d-8d45cf642719" path="/var/lib/kubelet/pods/6ce9425a-af69-4b36-8b1d-8d45cf642719/volumes" Dec 02 14:28:45 crc kubenswrapper[4900]: I1202 14:28:45.116953 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:28:45 crc kubenswrapper[4900]: I1202 14:28:45.117341 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:28:45 crc kubenswrapper[4900]: I1202 14:28:45.117834 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 14:28:45 crc kubenswrapper[4900]: I1202 14:28:45.118681 4900 
Dec 02 14:28:45 crc kubenswrapper[4900]: I1202 14:28:45.118681 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25b11fd4cf71fcd3c2d6b68c2b4317e68b7b5ccf1902e8341fedbdc2b2ddb0f7"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 14:28:45 crc kubenswrapper[4900]: I1202 14:28:45.118760 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://25b11fd4cf71fcd3c2d6b68c2b4317e68b7b5ccf1902e8341fedbdc2b2ddb0f7" gracePeriod=600
Dec 02 14:28:46 crc kubenswrapper[4900]: I1202 14:28:46.013846 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="25b11fd4cf71fcd3c2d6b68c2b4317e68b7b5ccf1902e8341fedbdc2b2ddb0f7" exitCode=0
Dec 02 14:28:46 crc kubenswrapper[4900]: I1202 14:28:46.013945 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"25b11fd4cf71fcd3c2d6b68c2b4317e68b7b5ccf1902e8341fedbdc2b2ddb0f7"}
Dec 02 14:28:46 crc kubenswrapper[4900]: I1202 14:28:46.014194 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499"}
Dec 02 14:28:46 crc kubenswrapper[4900]: I1202 14:28:46.014425 4900 scope.go:117] "RemoveContainer" containerID="64daaaf682d629d799ed58fa942e25e93d0f8ebd5cd5c375f0817a4738545a3f"
Dec 02 14:29:56 crc kubenswrapper[4900]: I1202 14:29:56.831993 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7fm2c"]
Dec 02 14:29:56 crc kubenswrapper[4900]: E1202 14:29:56.832819 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce9425a-af69-4b36-8b1d-8d45cf642719" containerName="registry-server"
Dec 02 14:29:56 crc kubenswrapper[4900]: I1202 14:29:56.832836 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce9425a-af69-4b36-8b1d-8d45cf642719" containerName="registry-server"
Dec 02 14:29:56 crc kubenswrapper[4900]: E1202 14:29:56.832850 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce9425a-af69-4b36-8b1d-8d45cf642719" containerName="extract-utilities"
Dec 02 14:29:56 crc kubenswrapper[4900]: I1202 14:29:56.832860 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce9425a-af69-4b36-8b1d-8d45cf642719" containerName="extract-utilities"
Dec 02 14:29:56 crc kubenswrapper[4900]: E1202 14:29:56.832871 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce9425a-af69-4b36-8b1d-8d45cf642719" containerName="extract-content"
Dec 02 14:29:56 crc kubenswrapper[4900]: I1202 14:29:56.832879 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce9425a-af69-4b36-8b1d-8d45cf642719" containerName="extract-content"
Dec 02 14:29:56 crc kubenswrapper[4900]: I1202 14:29:56.833039 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce9425a-af69-4b36-8b1d-8d45cf642719" containerName="registry-server"
Dec 02 14:29:56 crc kubenswrapper[4900]: I1202 14:29:56.835362 4900 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:29:56 crc kubenswrapper[4900]: I1202 14:29:56.860193 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7fm2c"] Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.013625 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6794\" (UniqueName: \"kubernetes.io/projected/8d5c6da4-dfc6-43be-9124-6107cc309880-kube-api-access-x6794\") pod \"community-operators-7fm2c\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.013776 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-utilities\") pod \"community-operators-7fm2c\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.013927 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-catalog-content\") pod \"community-operators-7fm2c\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.114989 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-catalog-content\") pod \"community-operators-7fm2c\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.115104 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6794\" (UniqueName: \"kubernetes.io/projected/8d5c6da4-dfc6-43be-9124-6107cc309880-kube-api-access-x6794\") pod \"community-operators-7fm2c\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.115138 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-utilities\") pod \"community-operators-7fm2c\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.115691 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-catalog-content\") pod \"community-operators-7fm2c\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.115729 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-utilities\") pod \"community-operators-7fm2c\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.137214 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x6794\" (UniqueName: \"kubernetes.io/projected/8d5c6da4-dfc6-43be-9124-6107cc309880-kube-api-access-x6794\") pod \"community-operators-7fm2c\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.167535 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.438378 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7fm2c"] Dec 02 14:29:57 crc kubenswrapper[4900]: I1202 14:29:57.579743 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fm2c" event={"ID":"8d5c6da4-dfc6-43be-9124-6107cc309880","Type":"ContainerStarted","Data":"c92ffa1b39ce97beb2f66ccae5e5e29f5bb27fd2a90ad0820aedf3e8921d6555"} Dec 02 14:29:58 crc kubenswrapper[4900]: I1202 14:29:58.587864 4900 generic.go:334] "Generic (PLEG): container finished" podID="8d5c6da4-dfc6-43be-9124-6107cc309880" containerID="59efb499a9c16988eff6dba9a5dba78b66eb7c4c3d6ebcbe8a168195aed97ff0" exitCode=0 Dec 02 14:29:58 crc kubenswrapper[4900]: I1202 14:29:58.587937 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fm2c" event={"ID":"8d5c6da4-dfc6-43be-9124-6107cc309880","Type":"ContainerDied","Data":"59efb499a9c16988eff6dba9a5dba78b66eb7c4c3d6ebcbe8a168195aed97ff0"} Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.181383 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7"] Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.183283 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.186923 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.188020 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.197793 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7"] Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.268272 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3ea0b54-fc53-41be-914a-699e24d18400-secret-volume\") pod \"collect-profiles-29411430-gq9q7\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.268468 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vslng\" (UniqueName: \"kubernetes.io/projected/a3ea0b54-fc53-41be-914a-699e24d18400-kube-api-access-vslng\") pod \"collect-profiles-29411430-gq9q7\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.268531 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3ea0b54-fc53-41be-914a-699e24d18400-config-volume\") pod \"collect-profiles-29411430-gq9q7\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.370394 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3ea0b54-fc53-41be-914a-699e24d18400-secret-volume\") pod \"collect-profiles-29411430-gq9q7\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.370472 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vslng\" (UniqueName: \"kubernetes.io/projected/a3ea0b54-fc53-41be-914a-699e24d18400-kube-api-access-vslng\") pod \"collect-profiles-29411430-gq9q7\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.370496 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3ea0b54-fc53-41be-914a-699e24d18400-config-volume\") pod \"collect-profiles-29411430-gq9q7\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.371450 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3ea0b54-fc53-41be-914a-699e24d18400-config-volume\") pod 
\"collect-profiles-29411430-gq9q7\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.385002 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3ea0b54-fc53-41be-914a-699e24d18400-secret-volume\") pod \"collect-profiles-29411430-gq9q7\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.392182 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vslng\" (UniqueName: \"kubernetes.io/projected/a3ea0b54-fc53-41be-914a-699e24d18400-kube-api-access-vslng\") pod \"collect-profiles-29411430-gq9q7\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.519433 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:00 crc kubenswrapper[4900]: I1202 14:30:00.935251 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7"] Dec 02 14:30:00 crc kubenswrapper[4900]: W1202 14:30:00.939181 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3ea0b54_fc53_41be_914a_699e24d18400.slice/crio-43ccc5becc14b76942ae3052d4c4e968a4c25797c965a6c6598ec8cfcecf7e46 WatchSource:0}: Error finding container 43ccc5becc14b76942ae3052d4c4e968a4c25797c965a6c6598ec8cfcecf7e46: Status 404 returned error can't find the container with id 43ccc5becc14b76942ae3052d4c4e968a4c25797c965a6c6598ec8cfcecf7e46 Dec 02 14:30:01 crc kubenswrapper[4900]: I1202 14:30:01.625019 4900 generic.go:334] "Generic (PLEG): container finished" podID="a3ea0b54-fc53-41be-914a-699e24d18400" containerID="ee7b3a6ef268d5351c724802c5e4dd8fa499c56013249d49a7d2c52e3fc2eec9" exitCode=0 Dec 02 14:30:01 crc kubenswrapper[4900]: I1202 14:30:01.625128 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" event={"ID":"a3ea0b54-fc53-41be-914a-699e24d18400","Type":"ContainerDied","Data":"ee7b3a6ef268d5351c724802c5e4dd8fa499c56013249d49a7d2c52e3fc2eec9"} Dec 02 14:30:01 crc kubenswrapper[4900]: I1202 14:30:01.625285 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" event={"ID":"a3ea0b54-fc53-41be-914a-699e24d18400","Type":"ContainerStarted","Data":"43ccc5becc14b76942ae3052d4c4e968a4c25797c965a6c6598ec8cfcecf7e46"} Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.595177 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.653856 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" event={"ID":"a3ea0b54-fc53-41be-914a-699e24d18400","Type":"ContainerDied","Data":"43ccc5becc14b76942ae3052d4c4e968a4c25797c965a6c6598ec8cfcecf7e46"} Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.653904 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43ccc5becc14b76942ae3052d4c4e968a4c25797c965a6c6598ec8cfcecf7e46" Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.653950 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7" Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.751712 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3ea0b54-fc53-41be-914a-699e24d18400-secret-volume\") pod \"a3ea0b54-fc53-41be-914a-699e24d18400\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.751952 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3ea0b54-fc53-41be-914a-699e24d18400-config-volume\") pod \"a3ea0b54-fc53-41be-914a-699e24d18400\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.752071 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vslng\" (UniqueName: \"kubernetes.io/projected/a3ea0b54-fc53-41be-914a-699e24d18400-kube-api-access-vslng\") pod \"a3ea0b54-fc53-41be-914a-699e24d18400\" (UID: \"a3ea0b54-fc53-41be-914a-699e24d18400\") " Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.753575 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ea0b54-fc53-41be-914a-699e24d18400-config-volume" (OuterVolumeSpecName: "config-volume") pod "a3ea0b54-fc53-41be-914a-699e24d18400" (UID: "a3ea0b54-fc53-41be-914a-699e24d18400"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.757906 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ea0b54-fc53-41be-914a-699e24d18400-kube-api-access-vslng" (OuterVolumeSpecName: "kube-api-access-vslng") pod "a3ea0b54-fc53-41be-914a-699e24d18400" (UID: "a3ea0b54-fc53-41be-914a-699e24d18400"). InnerVolumeSpecName "kube-api-access-vslng". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.762960 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ea0b54-fc53-41be-914a-699e24d18400-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a3ea0b54-fc53-41be-914a-699e24d18400" (UID: "a3ea0b54-fc53-41be-914a-699e24d18400"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.854245 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vslng\" (UniqueName: \"kubernetes.io/projected/a3ea0b54-fc53-41be-914a-699e24d18400-kube-api-access-vslng\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.854314 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3ea0b54-fc53-41be-914a-699e24d18400-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:05 crc kubenswrapper[4900]: I1202 14:30:05.854336 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3ea0b54-fc53-41be-914a-699e24d18400-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:06 crc kubenswrapper[4900]: I1202 14:30:06.689103 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p"] Dec 02 14:30:06 crc kubenswrapper[4900]: I1202 14:30:06.695172 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411385-l4l7p"] Dec 02 14:30:06 crc kubenswrapper[4900]: I1202 14:30:06.922838 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8694fceb-8a5b-41a0-8c8a-2dbca31557ca" path="/var/lib/kubelet/pods/8694fceb-8a5b-41a0-8c8a-2dbca31557ca/volumes" Dec 02 14:30:11 crc kubenswrapper[4900]: I1202 14:30:11.705982 4900 generic.go:334] "Generic (PLEG): container finished" podID="8d5c6da4-dfc6-43be-9124-6107cc309880" containerID="ed0c22841b6189789c41debe553dc5f8281fde4e855b77fe65418644a6d98510" exitCode=0 Dec 02 14:30:11 crc kubenswrapper[4900]: I1202 14:30:11.706047 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fm2c" event={"ID":"8d5c6da4-dfc6-43be-9124-6107cc309880","Type":"ContainerDied","Data":"ed0c22841b6189789c41debe553dc5f8281fde4e855b77fe65418644a6d98510"} Dec 02 14:30:13 crc kubenswrapper[4900]: I1202 14:30:13.737092 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fm2c" event={"ID":"8d5c6da4-dfc6-43be-9124-6107cc309880","Type":"ContainerStarted","Data":"dd0902bad7b8d8d520b08c2f86be4a5d65742a94a1bf764f18f92d46164be823"} Dec 02 14:30:13 crc kubenswrapper[4900]: I1202 14:30:13.752896 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7fm2c" podStartSLOduration=3.714599012 podStartE2EDuration="17.752875767s" podCreationTimestamp="2025-12-02 14:29:56 +0000 UTC" firstStartedPulling="2025-12-02 14:29:58.590862207 +0000 UTC m=+2844.006676068" lastFinishedPulling="2025-12-02 14:30:12.629138972 +0000 UTC m=+2858.044952823" observedRunningTime="2025-12-02 14:30:13.751849528 +0000 UTC m=+2859.167663379" watchObservedRunningTime="2025-12-02 14:30:13.752875767 +0000 UTC m=+2859.168689618" Dec 02 14:30:17 crc kubenswrapper[4900]: I1202 14:30:17.167976 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:30:17 crc kubenswrapper[4900]: I1202 14:30:17.168071 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:30:17 crc kubenswrapper[4900]: I1202 14:30:17.210100 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:30:17 crc kubenswrapper[4900]: I1202 14:30:17.808468 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7fm2c" Dec 02 14:30:17 crc kubenswrapper[4900]: I1202 14:30:17.889328 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7fm2c"] Dec 02 14:30:17 crc kubenswrapper[4900]: I1202 14:30:17.928394 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mp6vg"] Dec 02 14:30:17 crc kubenswrapper[4900]: I1202 14:30:17.929204 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mp6vg" podUID="cb010fa9-e439-452d-9f8f-b9882850ac8a" containerName="registry-server" containerID="cri-o://89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c" gracePeriod=2 Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.601903 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mp6vg" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.772478 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-catalog-content\") pod \"cb010fa9-e439-452d-9f8f-b9882850ac8a\" (UID: \"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.772564 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-utilities\") pod \"cb010fa9-e439-452d-9f8f-b9882850ac8a\" (UID: \"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.772604 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk7jl\" (UniqueName: \"kubernetes.io/projected/cb010fa9-e439-452d-9f8f-b9882850ac8a-kube-api-access-bk7jl\") pod \"cb010fa9-e439-452d-9f8f-b9882850ac8a\" (UID: \"cb010fa9-e439-452d-9f8f-b9882850ac8a\") " Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.773599 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-utilities" (OuterVolumeSpecName: "utilities") pod "cb010fa9-e439-452d-9f8f-b9882850ac8a" (UID: "cb010fa9-e439-452d-9f8f-b9882850ac8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.781903 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb010fa9-e439-452d-9f8f-b9882850ac8a-kube-api-access-bk7jl" (OuterVolumeSpecName: "kube-api-access-bk7jl") pod "cb010fa9-e439-452d-9f8f-b9882850ac8a" (UID: "cb010fa9-e439-452d-9f8f-b9882850ac8a"). InnerVolumeSpecName "kube-api-access-bk7jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.783621 4900 generic.go:334] "Generic (PLEG): container finished" podID="cb010fa9-e439-452d-9f8f-b9882850ac8a" containerID="89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c" exitCode=0 Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.783689 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mp6vg" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.783725 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mp6vg" event={"ID":"cb010fa9-e439-452d-9f8f-b9882850ac8a","Type":"ContainerDied","Data":"89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c"} Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.783816 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mp6vg" event={"ID":"cb010fa9-e439-452d-9f8f-b9882850ac8a","Type":"ContainerDied","Data":"bbe7b9dfa29e19952947d16a4463fb9a88bfba114badaa762c56f5e25564212b"} Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.783851 4900 scope.go:117] "RemoveContainer" containerID="89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.825276 4900 scope.go:117] "RemoveContainer" containerID="7f5ee1e76056692f9233d20c19915c87860af6d258b2187c2856f7211828eb29" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.835921 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb010fa9-e439-452d-9f8f-b9882850ac8a" (UID: "cb010fa9-e439-452d-9f8f-b9882850ac8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.857043 4900 scope.go:117] "RemoveContainer" containerID="c83323d943218ed789b8570273b5bae61e14c2e0ad8e56d5f908c29477d27169" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.874780 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.874818 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb010fa9-e439-452d-9f8f-b9882850ac8a-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.874831 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk7jl\" (UniqueName: \"kubernetes.io/projected/cb010fa9-e439-452d-9f8f-b9882850ac8a-kube-api-access-bk7jl\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.885928 4900 scope.go:117] "RemoveContainer" containerID="89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c" Dec 02 14:30:19 crc kubenswrapper[4900]: E1202 14:30:19.886404 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c\": container with ID starting with 89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c not found: ID does not exist" containerID="89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.886485 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c"} err="failed to get container status \"89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c\": rpc error: code = NotFound desc = could not find container 
\"89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c\": container with ID starting with 89fb68c1ad3664f03f3a5c0103939742a4242611ad581588c38b831d0a16dc4c not found: ID does not exist" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.886533 4900 scope.go:117] "RemoveContainer" containerID="7f5ee1e76056692f9233d20c19915c87860af6d258b2187c2856f7211828eb29" Dec 02 14:30:19 crc kubenswrapper[4900]: E1202 14:30:19.887246 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f5ee1e76056692f9233d20c19915c87860af6d258b2187c2856f7211828eb29\": container with ID starting with 7f5ee1e76056692f9233d20c19915c87860af6d258b2187c2856f7211828eb29 not found: ID does not exist" containerID="7f5ee1e76056692f9233d20c19915c87860af6d258b2187c2856f7211828eb29" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.887284 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5ee1e76056692f9233d20c19915c87860af6d258b2187c2856f7211828eb29"} err="failed to get container status \"7f5ee1e76056692f9233d20c19915c87860af6d258b2187c2856f7211828eb29\": rpc error: code = NotFound desc = could not find container \"7f5ee1e76056692f9233d20c19915c87860af6d258b2187c2856f7211828eb29\": container with ID starting with 7f5ee1e76056692f9233d20c19915c87860af6d258b2187c2856f7211828eb29 not found: ID does not exist" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.887313 4900 scope.go:117] "RemoveContainer" containerID="c83323d943218ed789b8570273b5bae61e14c2e0ad8e56d5f908c29477d27169" Dec 02 14:30:19 crc kubenswrapper[4900]: E1202 14:30:19.888042 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c83323d943218ed789b8570273b5bae61e14c2e0ad8e56d5f908c29477d27169\": container with ID starting with c83323d943218ed789b8570273b5bae61e14c2e0ad8e56d5f908c29477d27169 not found: ID does not exist" containerID="c83323d943218ed789b8570273b5bae61e14c2e0ad8e56d5f908c29477d27169" Dec 02 14:30:19 crc kubenswrapper[4900]: I1202 14:30:19.888115 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c83323d943218ed789b8570273b5bae61e14c2e0ad8e56d5f908c29477d27169"} err="failed to get container status \"c83323d943218ed789b8570273b5bae61e14c2e0ad8e56d5f908c29477d27169\": rpc error: code = NotFound desc = could not find container \"c83323d943218ed789b8570273b5bae61e14c2e0ad8e56d5f908c29477d27169\": container with ID starting with c83323d943218ed789b8570273b5bae61e14c2e0ad8e56d5f908c29477d27169 not found: ID does not exist" Dec 02 14:30:20 crc kubenswrapper[4900]: I1202 14:30:20.110599 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mp6vg"] Dec 02 14:30:20 crc kubenswrapper[4900]: I1202 14:30:20.115522 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mp6vg"] Dec 02 14:30:20 crc kubenswrapper[4900]: I1202 14:30:20.930514 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb010fa9-e439-452d-9f8f-b9882850ac8a" path="/var/lib/kubelet/pods/cb010fa9-e439-452d-9f8f-b9882850ac8a/volumes" Dec 02 14:30:45 crc kubenswrapper[4900]: I1202 14:30:45.116865 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:30:45 crc kubenswrapper[4900]: I1202 14:30:45.119885 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:30:59 crc kubenswrapper[4900]: I1202 14:30:59.529185 4900 scope.go:117] "RemoveContainer" containerID="64ab29990787ef15b1e31f7b50b444685fad488c5354f36b0d04d47d3284e09a" Dec 02 14:31:15 crc kubenswrapper[4900]: I1202 14:31:15.116688 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:31:15 crc kubenswrapper[4900]: I1202 14:31:15.117297 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.398636 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lk424"] Dec 02 14:31:41 crc kubenswrapper[4900]: E1202 14:31:41.399878 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb010fa9-e439-452d-9f8f-b9882850ac8a" containerName="extract-utilities" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.399899 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb010fa9-e439-452d-9f8f-b9882850ac8a" containerName="extract-utilities" Dec 02 14:31:41 crc kubenswrapper[4900]: E1202 14:31:41.399949 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb010fa9-e439-452d-9f8f-b9882850ac8a" containerName="extract-content" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.399960 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb010fa9-e439-452d-9f8f-b9882850ac8a" containerName="extract-content" Dec 02 14:31:41 crc kubenswrapper[4900]: E1202 14:31:41.399974 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ea0b54-fc53-41be-914a-699e24d18400" containerName="collect-profiles" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.399985 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ea0b54-fc53-41be-914a-699e24d18400" containerName="collect-profiles" Dec 02 14:31:41 crc kubenswrapper[4900]: E1202 14:31:41.400010 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb010fa9-e439-452d-9f8f-b9882850ac8a" containerName="registry-server" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.400022 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb010fa9-e439-452d-9f8f-b9882850ac8a" containerName="registry-server" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.400228 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb010fa9-e439-452d-9f8f-b9882850ac8a" containerName="registry-server" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.400261 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ea0b54-fc53-41be-914a-699e24d18400" 
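
The cpu_manager and memory_manager RemoveStaleState lines above are routine cleanup despite their E (error) prefix: when the new certified-operators-lk424 pod is admitted, the kubelet prunes checkpointed CPU and memory assignments that still name containers of pods that no longer exist (here the deleted community-operators-mp6vg and collect-profiles pods). A simplified, illustrative Go sketch of that kind of pruning, with invented types rather than kubelet's actual implementation:

    package main

    import "fmt"

    // key identifies which container of which pod owns a checkpointed CPU set.
    type key struct {
        podUID        string
        containerName string
    }

    // pruneStale drops entries whose pod is no longer active, mirroring the
    // "Deleted CPUSet assignment" lines in the log.
    func pruneStale(state map[key][]int, activePods map[string]bool) {
        for k := range state {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container pod=%s name=%s\n",
                    k.podUID, k.containerName)
                delete(state, k) // deleting during range is safe in Go
            }
        }
    }

    func main() {
        state := map[key][]int{
            {podUID: "cb010fa9-e439-452d-9f8f-b9882850ac8a", containerName: "registry-server"}: {0, 1},
            {podUID: "b7d67d79-1014-4113-ac72-dbc420ba69bb", containerName: "extract-utilities"}: {2},
        }
        activePods := map[string]bool{"b7d67d79-1014-4113-ac72-dbc420ba69bb": true}
        pruneStale(state, activePods)
        fmt.Println("entries left:", len(state))
    }
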
containerName="collect-profiles" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.402007 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.423814 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lk424"] Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.530093 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-utilities\") pod \"certified-operators-lk424\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.530161 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-catalog-content\") pod \"certified-operators-lk424\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.530199 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7z52\" (UniqueName: \"kubernetes.io/projected/b7d67d79-1014-4113-ac72-dbc420ba69bb-kube-api-access-h7z52\") pod \"certified-operators-lk424\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.632146 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-utilities\") pod \"certified-operators-lk424\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.632257 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-catalog-content\") pod \"certified-operators-lk424\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.632309 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7z52\" (UniqueName: \"kubernetes.io/projected/b7d67d79-1014-4113-ac72-dbc420ba69bb-kube-api-access-h7z52\") pod \"certified-operators-lk424\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.632819 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-utilities\") pod \"certified-operators-lk424\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.633021 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-catalog-content\") pod \"certified-operators-lk424\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " 
pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.655502 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7z52\" (UniqueName: \"kubernetes.io/projected/b7d67d79-1014-4113-ac72-dbc420ba69bb-kube-api-access-h7z52\") pod \"certified-operators-lk424\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:41 crc kubenswrapper[4900]: I1202 14:31:41.729942 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:42 crc kubenswrapper[4900]: I1202 14:31:42.161162 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lk424"] Dec 02 14:31:42 crc kubenswrapper[4900]: I1202 14:31:42.471746 4900 generic.go:334] "Generic (PLEG): container finished" podID="b7d67d79-1014-4113-ac72-dbc420ba69bb" containerID="2b9cf838c9707924c9f5ce90f6d3bca8e0d07f13dfe97fe7f07cfb84a004cc0a" exitCode=0 Dec 02 14:31:42 crc kubenswrapper[4900]: I1202 14:31:42.471888 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk424" event={"ID":"b7d67d79-1014-4113-ac72-dbc420ba69bb","Type":"ContainerDied","Data":"2b9cf838c9707924c9f5ce90f6d3bca8e0d07f13dfe97fe7f07cfb84a004cc0a"} Dec 02 14:31:42 crc kubenswrapper[4900]: I1202 14:31:42.472165 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk424" event={"ID":"b7d67d79-1014-4113-ac72-dbc420ba69bb","Type":"ContainerStarted","Data":"68bbbc32d1527425faaff98ade21d8e3b370e4ae202ae4ea461e1d8c2159e1eb"} Dec 02 14:31:43 crc kubenswrapper[4900]: I1202 14:31:43.483239 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk424" event={"ID":"b7d67d79-1014-4113-ac72-dbc420ba69bb","Type":"ContainerStarted","Data":"2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27"} Dec 02 14:31:44 crc kubenswrapper[4900]: I1202 14:31:44.495095 4900 generic.go:334] "Generic (PLEG): container finished" podID="b7d67d79-1014-4113-ac72-dbc420ba69bb" containerID="2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27" exitCode=0 Dec 02 14:31:44 crc kubenswrapper[4900]: I1202 14:31:44.495140 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk424" event={"ID":"b7d67d79-1014-4113-ac72-dbc420ba69bb","Type":"ContainerDied","Data":"2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27"} Dec 02 14:31:45 crc kubenswrapper[4900]: I1202 14:31:45.116700 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:31:45 crc kubenswrapper[4900]: I1202 14:31:45.116788 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:31:45 crc kubenswrapper[4900]: I1202 14:31:45.116847 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 14:31:45 crc kubenswrapper[4900]: I1202 14:31:45.117735 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:31:45 crc kubenswrapper[4900]: I1202 14:31:45.117860 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" gracePeriod=600 Dec 02 14:31:45 crc kubenswrapper[4900]: E1202 14:31:45.252059 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:31:45 crc kubenswrapper[4900]: I1202 14:31:45.508760 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" exitCode=0 Dec 02 14:31:45 crc kubenswrapper[4900]: I1202 14:31:45.508795 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499"} Dec 02 14:31:45 crc kubenswrapper[4900]: I1202 14:31:45.508869 4900 scope.go:117] "RemoveContainer" containerID="25b11fd4cf71fcd3c2d6b68c2b4317e68b7b5ccf1902e8341fedbdc2b2ddb0f7" Dec 02 14:31:45 crc kubenswrapper[4900]: I1202 14:31:45.509475 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:31:45 crc kubenswrapper[4900]: E1202 14:31:45.509689 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:31:46 crc kubenswrapper[4900]: I1202 14:31:46.518673 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk424" event={"ID":"b7d67d79-1014-4113-ac72-dbc420ba69bb","Type":"ContainerStarted","Data":"bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e"} Dec 02 14:31:46 crc kubenswrapper[4900]: I1202 14:31:46.547607 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lk424" podStartSLOduration=2.713391869 podStartE2EDuration="5.547587743s" podCreationTimestamp="2025-12-02 14:31:41 +0000 UTC" firstStartedPulling="2025-12-02 14:31:42.474853019 +0000 UTC m=+2947.890666910" 
lastFinishedPulling="2025-12-02 14:31:45.309048913 +0000 UTC m=+2950.724862784" observedRunningTime="2025-12-02 14:31:46.54428087 +0000 UTC m=+2951.960094721" watchObservedRunningTime="2025-12-02 14:31:46.547587743 +0000 UTC m=+2951.963401594" Dec 02 14:31:51 crc kubenswrapper[4900]: I1202 14:31:51.730333 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:51 crc kubenswrapper[4900]: I1202 14:31:51.731065 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:51 crc kubenswrapper[4900]: I1202 14:31:51.786473 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:52 crc kubenswrapper[4900]: I1202 14:31:52.646027 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:52 crc kubenswrapper[4900]: I1202 14:31:52.706291 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lk424"] Dec 02 14:31:54 crc kubenswrapper[4900]: I1202 14:31:54.598857 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lk424" podUID="b7d67d79-1014-4113-ac72-dbc420ba69bb" containerName="registry-server" containerID="cri-o://bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e" gracePeriod=2 Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.105155 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.266529 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-utilities\") pod \"b7d67d79-1014-4113-ac72-dbc420ba69bb\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.266685 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7z52\" (UniqueName: \"kubernetes.io/projected/b7d67d79-1014-4113-ac72-dbc420ba69bb-kube-api-access-h7z52\") pod \"b7d67d79-1014-4113-ac72-dbc420ba69bb\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.266731 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-catalog-content\") pod \"b7d67d79-1014-4113-ac72-dbc420ba69bb\" (UID: \"b7d67d79-1014-4113-ac72-dbc420ba69bb\") " Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.268062 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-utilities" (OuterVolumeSpecName: "utilities") pod "b7d67d79-1014-4113-ac72-dbc420ba69bb" (UID: "b7d67d79-1014-4113-ac72-dbc420ba69bb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.275344 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d67d79-1014-4113-ac72-dbc420ba69bb-kube-api-access-h7z52" (OuterVolumeSpecName: "kube-api-access-h7z52") pod "b7d67d79-1014-4113-ac72-dbc420ba69bb" (UID: "b7d67d79-1014-4113-ac72-dbc420ba69bb"). InnerVolumeSpecName "kube-api-access-h7z52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.325322 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7d67d79-1014-4113-ac72-dbc420ba69bb" (UID: "b7d67d79-1014-4113-ac72-dbc420ba69bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.367898 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7z52\" (UniqueName: \"kubernetes.io/projected/b7d67d79-1014-4113-ac72-dbc420ba69bb-kube-api-access-h7z52\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.367929 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.367938 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d67d79-1014-4113-ac72-dbc420ba69bb-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.623786 4900 generic.go:334] "Generic (PLEG): container finished" podID="b7d67d79-1014-4113-ac72-dbc420ba69bb" containerID="bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e" exitCode=0 Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.623819 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk424" event={"ID":"b7d67d79-1014-4113-ac72-dbc420ba69bb","Type":"ContainerDied","Data":"bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e"} Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.623797 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lk424" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.623858 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lk424" event={"ID":"b7d67d79-1014-4113-ac72-dbc420ba69bb","Type":"ContainerDied","Data":"68bbbc32d1527425faaff98ade21d8e3b370e4ae202ae4ea461e1d8c2159e1eb"} Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.623875 4900 scope.go:117] "RemoveContainer" containerID="bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.662221 4900 scope.go:117] "RemoveContainer" containerID="2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.699470 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lk424"] Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.706210 4900 scope.go:117] "RemoveContainer" containerID="2b9cf838c9707924c9f5ce90f6d3bca8e0d07f13dfe97fe7f07cfb84a004cc0a" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.713082 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lk424"] Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.733441 4900 scope.go:117] "RemoveContainer" containerID="bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e" Dec 02 14:31:56 crc kubenswrapper[4900]: E1202 14:31:56.734272 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e\": container with ID starting with bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e not found: ID does not exist" containerID="bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.734337 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e"} err="failed to get container status \"bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e\": rpc error: code = NotFound desc = could not find container \"bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e\": container with ID starting with bd97691ad0f7f29785bb9a67c609a2cb3de324c42bd9f7d2f56db5958307941e not found: ID does not exist" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.734374 4900 scope.go:117] "RemoveContainer" containerID="2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27" Dec 02 14:31:56 crc kubenswrapper[4900]: E1202 14:31:56.735033 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27\": container with ID starting with 2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27 not found: ID does not exist" containerID="2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.735080 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27"} err="failed to get container status \"2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27\": rpc error: code = NotFound desc = could not find 
container \"2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27\": container with ID starting with 2650fbf0e1fbdaf2976db1b437399d4015f2c97f9a3d03a56e97316b255aac27 not found: ID does not exist" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.735136 4900 scope.go:117] "RemoveContainer" containerID="2b9cf838c9707924c9f5ce90f6d3bca8e0d07f13dfe97fe7f07cfb84a004cc0a" Dec 02 14:31:56 crc kubenswrapper[4900]: E1202 14:31:56.735707 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b9cf838c9707924c9f5ce90f6d3bca8e0d07f13dfe97fe7f07cfb84a004cc0a\": container with ID starting with 2b9cf838c9707924c9f5ce90f6d3bca8e0d07f13dfe97fe7f07cfb84a004cc0a not found: ID does not exist" containerID="2b9cf838c9707924c9f5ce90f6d3bca8e0d07f13dfe97fe7f07cfb84a004cc0a" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.735891 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9cf838c9707924c9f5ce90f6d3bca8e0d07f13dfe97fe7f07cfb84a004cc0a"} err="failed to get container status \"2b9cf838c9707924c9f5ce90f6d3bca8e0d07f13dfe97fe7f07cfb84a004cc0a\": rpc error: code = NotFound desc = could not find container \"2b9cf838c9707924c9f5ce90f6d3bca8e0d07f13dfe97fe7f07cfb84a004cc0a\": container with ID starting with 2b9cf838c9707924c9f5ce90f6d3bca8e0d07f13dfe97fe7f07cfb84a004cc0a not found: ID does not exist" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.910344 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:31:56 crc kubenswrapper[4900]: E1202 14:31:56.910569 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:31:56 crc kubenswrapper[4900]: I1202 14:31:56.922053 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d67d79-1014-4113-ac72-dbc420ba69bb" path="/var/lib/kubelet/pods/b7d67d79-1014-4113-ac72-dbc420ba69bb/volumes" Dec 02 14:32:08 crc kubenswrapper[4900]: I1202 14:32:08.910435 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:32:08 crc kubenswrapper[4900]: E1202 14:32:08.911417 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:32:23 crc kubenswrapper[4900]: I1202 14:32:23.910275 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:32:23 crc kubenswrapper[4900]: E1202 14:32:23.911083 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:32:35 crc kubenswrapper[4900]: I1202 14:32:35.910210 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:32:35 crc kubenswrapper[4900]: E1202 14:32:35.911333 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:32:49 crc kubenswrapper[4900]: I1202 14:32:49.909910 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:32:49 crc kubenswrapper[4900]: E1202 14:32:49.910763 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:33:02 crc kubenswrapper[4900]: I1202 14:33:02.910824 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:33:02 crc kubenswrapper[4900]: E1202 14:33:02.911554 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:33:17 crc kubenswrapper[4900]: I1202 14:33:17.910595 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:33:17 crc kubenswrapper[4900]: E1202 14:33:17.911759 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:33:29 crc kubenswrapper[4900]: I1202 14:33:29.910080 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:33:29 crc kubenswrapper[4900]: E1202 14:33:29.911000 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" 
podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:33:41 crc kubenswrapper[4900]: I1202 14:33:41.910741 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:33:41 crc kubenswrapper[4900]: E1202 14:33:41.911637 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:33:55 crc kubenswrapper[4900]: I1202 14:33:55.910527 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:33:55 crc kubenswrapper[4900]: E1202 14:33:55.911543 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:34:06 crc kubenswrapper[4900]: I1202 14:34:06.910234 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:34:06 crc kubenswrapper[4900]: E1202 14:34:06.911252 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:34:20 crc kubenswrapper[4900]: I1202 14:34:20.909811 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:34:20 crc kubenswrapper[4900]: E1202 14:34:20.910442 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:34:31 crc kubenswrapper[4900]: I1202 14:34:31.910772 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:34:31 crc kubenswrapper[4900]: E1202 14:34:31.911485 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:34:44 crc kubenswrapper[4900]: I1202 14:34:44.915522 4900 scope.go:117] "RemoveContainer" 
containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:34:44 crc kubenswrapper[4900]: E1202 14:34:44.916214 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:34:59 crc kubenswrapper[4900]: I1202 14:34:59.910083 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:34:59 crc kubenswrapper[4900]: E1202 14:34:59.910896 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:35:12 crc kubenswrapper[4900]: I1202 14:35:12.911126 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:35:12 crc kubenswrapper[4900]: E1202 14:35:12.912128 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:35:23 crc kubenswrapper[4900]: I1202 14:35:23.910862 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:35:23 crc kubenswrapper[4900]: E1202 14:35:23.911910 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:35:37 crc kubenswrapper[4900]: I1202 14:35:37.910590 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:35:37 crc kubenswrapper[4900]: E1202 14:35:37.911727 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:35:52 crc kubenswrapper[4900]: I1202 14:35:52.910867 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:35:52 crc kubenswrapper[4900]: E1202 14:35:52.912626 4900 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:36:06 crc kubenswrapper[4900]: I1202 14:36:06.910797 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:36:06 crc kubenswrapper[4900]: E1202 14:36:06.911982 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:36:19 crc kubenswrapper[4900]: I1202 14:36:19.910231 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:36:19 crc kubenswrapper[4900]: E1202 14:36:19.910965 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:36:33 crc kubenswrapper[4900]: I1202 14:36:33.910132 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:36:33 crc kubenswrapper[4900]: E1202 14:36:33.910825 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:36:44 crc kubenswrapper[4900]: I1202 14:36:44.916672 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:36:44 crc kubenswrapper[4900]: E1202 14:36:44.917411 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:36:57 crc kubenswrapper[4900]: I1202 14:36:57.910103 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:36:58 crc kubenswrapper[4900]: I1202 14:36:58.331366 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" 
event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"a882a1b3cff6db048c62ae51521a947967772832f7a499093c51e175f53e3047"} Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.619382 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ftjdq"] Dec 02 14:37:41 crc kubenswrapper[4900]: E1202 14:37:41.620284 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d67d79-1014-4113-ac72-dbc420ba69bb" containerName="registry-server" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.620298 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d67d79-1014-4113-ac72-dbc420ba69bb" containerName="registry-server" Dec 02 14:37:41 crc kubenswrapper[4900]: E1202 14:37:41.620320 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d67d79-1014-4113-ac72-dbc420ba69bb" containerName="extract-utilities" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.620327 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d67d79-1014-4113-ac72-dbc420ba69bb" containerName="extract-utilities" Dec 02 14:37:41 crc kubenswrapper[4900]: E1202 14:37:41.620339 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d67d79-1014-4113-ac72-dbc420ba69bb" containerName="extract-content" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.620345 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d67d79-1014-4113-ac72-dbc420ba69bb" containerName="extract-content" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.620528 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d67d79-1014-4113-ac72-dbc420ba69bb" containerName="registry-server" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.621767 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.641079 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftjdq"] Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.788540 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-catalog-content\") pod \"redhat-operators-ftjdq\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.788857 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-utilities\") pod \"redhat-operators-ftjdq\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.788928 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gph24\" (UniqueName: \"kubernetes.io/projected/40ff73c3-8178-4bbf-a413-cae95eb8f222-kube-api-access-gph24\") pod \"redhat-operators-ftjdq\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.891329 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gph24\" (UniqueName: \"kubernetes.io/projected/40ff73c3-8178-4bbf-a413-cae95eb8f222-kube-api-access-gph24\") pod \"redhat-operators-ftjdq\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.891468 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-catalog-content\") pod \"redhat-operators-ftjdq\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.891577 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-utilities\") pod \"redhat-operators-ftjdq\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.892310 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-utilities\") pod \"redhat-operators-ftjdq\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.892309 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-catalog-content\") pod \"redhat-operators-ftjdq\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.914619 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gph24\" (UniqueName: \"kubernetes.io/projected/40ff73c3-8178-4bbf-a413-cae95eb8f222-kube-api-access-gph24\") pod \"redhat-operators-ftjdq\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:41 crc kubenswrapper[4900]: I1202 14:37:41.950431 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:42 crc kubenswrapper[4900]: I1202 14:37:42.431838 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftjdq"] Dec 02 14:37:42 crc kubenswrapper[4900]: I1202 14:37:42.729625 4900 generic.go:334] "Generic (PLEG): container finished" podID="40ff73c3-8178-4bbf-a413-cae95eb8f222" containerID="77b64bf866a822fec19287cb1520582f2fdbfcc6e869f9be835984a64e2752cb" exitCode=0 Dec 02 14:37:42 crc kubenswrapper[4900]: I1202 14:37:42.729702 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftjdq" event={"ID":"40ff73c3-8178-4bbf-a413-cae95eb8f222","Type":"ContainerDied","Data":"77b64bf866a822fec19287cb1520582f2fdbfcc6e869f9be835984a64e2752cb"} Dec 02 14:37:42 crc kubenswrapper[4900]: I1202 14:37:42.729965 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftjdq" event={"ID":"40ff73c3-8178-4bbf-a413-cae95eb8f222","Type":"ContainerStarted","Data":"527c61acb01fede57822bda57d5de6290443bd4fee535f126f4fd7d0dbc57c4d"} Dec 02 14:37:42 crc kubenswrapper[4900]: I1202 14:37:42.731783 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:37:44 crc kubenswrapper[4900]: I1202 14:37:44.748342 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftjdq" event={"ID":"40ff73c3-8178-4bbf-a413-cae95eb8f222","Type":"ContainerStarted","Data":"72c0a44c053386b8bf8fe0335a3989d1b9b791284ac35112b946ea2dfe091662"} Dec 02 14:37:45 crc kubenswrapper[4900]: I1202 14:37:45.761480 4900 generic.go:334] "Generic (PLEG): container finished" podID="40ff73c3-8178-4bbf-a413-cae95eb8f222" containerID="72c0a44c053386b8bf8fe0335a3989d1b9b791284ac35112b946ea2dfe091662" exitCode=0 Dec 02 14:37:45 crc kubenswrapper[4900]: I1202 14:37:45.761527 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftjdq" event={"ID":"40ff73c3-8178-4bbf-a413-cae95eb8f222","Type":"ContainerDied","Data":"72c0a44c053386b8bf8fe0335a3989d1b9b791284ac35112b946ea2dfe091662"} Dec 02 14:37:46 crc kubenswrapper[4900]: I1202 14:37:46.790285 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftjdq" event={"ID":"40ff73c3-8178-4bbf-a413-cae95eb8f222","Type":"ContainerStarted","Data":"db2c7ea78feb274d833b7be7acff3fff71111bf7e0ede22d5f2054686e594643"} Dec 02 14:37:46 crc kubenswrapper[4900]: I1202 14:37:46.816250 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ftjdq" podStartSLOduration=2.245761716 podStartE2EDuration="5.81622721s" podCreationTimestamp="2025-12-02 14:37:41 +0000 UTC" firstStartedPulling="2025-12-02 14:37:42.731499185 +0000 UTC m=+3308.147313036" lastFinishedPulling="2025-12-02 14:37:46.301964679 +0000 UTC m=+3311.717778530" observedRunningTime="2025-12-02 14:37:46.812241978 +0000 UTC m=+3312.228055839" watchObservedRunningTime="2025-12-02 14:37:46.81622721 +0000 UTC m=+3312.232041071" Dec 02 14:37:51 crc 
kubenswrapper[4900]: I1202 14:37:51.951369 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:51 crc kubenswrapper[4900]: I1202 14:37:51.951867 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:52 crc kubenswrapper[4900]: I1202 14:37:52.001584 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:52 crc kubenswrapper[4900]: I1202 14:37:52.950510 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:53 crc kubenswrapper[4900]: I1202 14:37:53.002214 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ftjdq"] Dec 02 14:37:54 crc kubenswrapper[4900]: I1202 14:37:54.908879 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ftjdq" podUID="40ff73c3-8178-4bbf-a413-cae95eb8f222" containerName="registry-server" containerID="cri-o://db2c7ea78feb274d833b7be7acff3fff71111bf7e0ede22d5f2054686e594643" gracePeriod=2 Dec 02 14:37:56 crc kubenswrapper[4900]: I1202 14:37:56.928957 4900 generic.go:334] "Generic (PLEG): container finished" podID="40ff73c3-8178-4bbf-a413-cae95eb8f222" containerID="db2c7ea78feb274d833b7be7acff3fff71111bf7e0ede22d5f2054686e594643" exitCode=0 Dec 02 14:37:56 crc kubenswrapper[4900]: I1202 14:37:56.929262 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftjdq" event={"ID":"40ff73c3-8178-4bbf-a413-cae95eb8f222","Type":"ContainerDied","Data":"db2c7ea78feb274d833b7be7acff3fff71111bf7e0ede22d5f2054686e594643"} Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.212865 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.264547 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-catalog-content\") pod \"40ff73c3-8178-4bbf-a413-cae95eb8f222\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.264846 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-utilities\") pod \"40ff73c3-8178-4bbf-a413-cae95eb8f222\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.264906 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gph24\" (UniqueName: \"kubernetes.io/projected/40ff73c3-8178-4bbf-a413-cae95eb8f222-kube-api-access-gph24\") pod \"40ff73c3-8178-4bbf-a413-cae95eb8f222\" (UID: \"40ff73c3-8178-4bbf-a413-cae95eb8f222\") " Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.265736 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-utilities" (OuterVolumeSpecName: "utilities") pod "40ff73c3-8178-4bbf-a413-cae95eb8f222" (UID: "40ff73c3-8178-4bbf-a413-cae95eb8f222"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.271829 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ff73c3-8178-4bbf-a413-cae95eb8f222-kube-api-access-gph24" (OuterVolumeSpecName: "kube-api-access-gph24") pod "40ff73c3-8178-4bbf-a413-cae95eb8f222" (UID: "40ff73c3-8178-4bbf-a413-cae95eb8f222"). InnerVolumeSpecName "kube-api-access-gph24". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.366998 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gph24\" (UniqueName: \"kubernetes.io/projected/40ff73c3-8178-4bbf-a413-cae95eb8f222-kube-api-access-gph24\") on node \"crc\" DevicePath \"\"" Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.367042 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.372158 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40ff73c3-8178-4bbf-a413-cae95eb8f222" (UID: "40ff73c3-8178-4bbf-a413-cae95eb8f222"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.467636 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ff73c3-8178-4bbf-a413-cae95eb8f222-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.942185 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftjdq" event={"ID":"40ff73c3-8178-4bbf-a413-cae95eb8f222","Type":"ContainerDied","Data":"527c61acb01fede57822bda57d5de6290443bd4fee535f126f4fd7d0dbc57c4d"} Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.942243 4900 scope.go:117] "RemoveContainer" containerID="db2c7ea78feb274d833b7be7acff3fff71111bf7e0ede22d5f2054686e594643" Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.942269 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ftjdq" Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.961521 4900 scope.go:117] "RemoveContainer" containerID="72c0a44c053386b8bf8fe0335a3989d1b9b791284ac35112b946ea2dfe091662" Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.995634 4900 scope.go:117] "RemoveContainer" containerID="77b64bf866a822fec19287cb1520582f2fdbfcc6e869f9be835984a64e2752cb" Dec 02 14:37:57 crc kubenswrapper[4900]: I1202 14:37:57.997181 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ftjdq"] Dec 02 14:37:58 crc kubenswrapper[4900]: I1202 14:37:58.003030 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ftjdq"] Dec 02 14:37:58 crc kubenswrapper[4900]: I1202 14:37:58.920093 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ff73c3-8178-4bbf-a413-cae95eb8f222" path="/var/lib/kubelet/pods/40ff73c3-8178-4bbf-a413-cae95eb8f222/volumes" Dec 02 14:39:15 crc kubenswrapper[4900]: I1202 14:39:15.116919 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:39:15 crc kubenswrapper[4900]: I1202 14:39:15.117460 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.204688 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8dsmw"] Dec 02 14:39:20 crc kubenswrapper[4900]: E1202 14:39:20.205766 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ff73c3-8178-4bbf-a413-cae95eb8f222" containerName="registry-server" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.205791 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ff73c3-8178-4bbf-a413-cae95eb8f222" containerName="registry-server" Dec 02 14:39:20 crc kubenswrapper[4900]: E1202 14:39:20.205810 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ff73c3-8178-4bbf-a413-cae95eb8f222" containerName="extract-utilities" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.205819 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ff73c3-8178-4bbf-a413-cae95eb8f222" containerName="extract-utilities" Dec 02 14:39:20 crc kubenswrapper[4900]: E1202 14:39:20.205855 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ff73c3-8178-4bbf-a413-cae95eb8f222" containerName="extract-content" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.205863 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ff73c3-8178-4bbf-a413-cae95eb8f222" containerName="extract-content" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.206037 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ff73c3-8178-4bbf-a413-cae95eb8f222" containerName="registry-server" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.207354 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.212723 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dsmw"] Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.384313 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpflh\" (UniqueName: \"kubernetes.io/projected/17047842-a2d3-4c08-9ef3-8f887a995092-kube-api-access-fpflh\") pod \"redhat-marketplace-8dsmw\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.384397 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-catalog-content\") pod \"redhat-marketplace-8dsmw\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.384625 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-utilities\") pod \"redhat-marketplace-8dsmw\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.486151 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-catalog-content\") pod \"redhat-marketplace-8dsmw\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.486251 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-utilities\") pod \"redhat-marketplace-8dsmw\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.486331 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpflh\" (UniqueName: \"kubernetes.io/projected/17047842-a2d3-4c08-9ef3-8f887a995092-kube-api-access-fpflh\") pod \"redhat-marketplace-8dsmw\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.486866 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-catalog-content\") pod \"redhat-marketplace-8dsmw\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.486935 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-utilities\") pod \"redhat-marketplace-8dsmw\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.516583 4900 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fpflh\" (UniqueName: \"kubernetes.io/projected/17047842-a2d3-4c08-9ef3-8f887a995092-kube-api-access-fpflh\") pod \"redhat-marketplace-8dsmw\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:20 crc kubenswrapper[4900]: I1202 14:39:20.576544 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:21 crc kubenswrapper[4900]: I1202 14:39:21.037928 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dsmw"] Dec 02 14:39:21 crc kubenswrapper[4900]: I1202 14:39:21.663535 4900 generic.go:334] "Generic (PLEG): container finished" podID="17047842-a2d3-4c08-9ef3-8f887a995092" containerID="835dbf12af2e0a2275803e8adbb30f1b4d4f0a76c72bd3cd452a263d7d8d485f" exitCode=0 Dec 02 14:39:21 crc kubenswrapper[4900]: I1202 14:39:21.663701 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dsmw" event={"ID":"17047842-a2d3-4c08-9ef3-8f887a995092","Type":"ContainerDied","Data":"835dbf12af2e0a2275803e8adbb30f1b4d4f0a76c72bd3cd452a263d7d8d485f"} Dec 02 14:39:21 crc kubenswrapper[4900]: I1202 14:39:21.664119 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dsmw" event={"ID":"17047842-a2d3-4c08-9ef3-8f887a995092","Type":"ContainerStarted","Data":"7d7779775bd2b5346b940048fa40ed861beb0c825260ede9854bfb790bba459d"} Dec 02 14:39:23 crc kubenswrapper[4900]: I1202 14:39:23.685234 4900 generic.go:334] "Generic (PLEG): container finished" podID="17047842-a2d3-4c08-9ef3-8f887a995092" containerID="b9b4967dbc65f4c294ff56476038ac6ebbaaa31f71277b74939012d60836e6a2" exitCode=0 Dec 02 14:39:23 crc kubenswrapper[4900]: I1202 14:39:23.685366 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dsmw" event={"ID":"17047842-a2d3-4c08-9ef3-8f887a995092","Type":"ContainerDied","Data":"b9b4967dbc65f4c294ff56476038ac6ebbaaa31f71277b74939012d60836e6a2"} Dec 02 14:39:24 crc kubenswrapper[4900]: I1202 14:39:24.698525 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dsmw" event={"ID":"17047842-a2d3-4c08-9ef3-8f887a995092","Type":"ContainerStarted","Data":"7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef"} Dec 02 14:39:24 crc kubenswrapper[4900]: I1202 14:39:24.723295 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8dsmw" podStartSLOduration=2.26842533 podStartE2EDuration="4.723248122s" podCreationTimestamp="2025-12-02 14:39:20 +0000 UTC" firstStartedPulling="2025-12-02 14:39:21.665518576 +0000 UTC m=+3407.081332457" lastFinishedPulling="2025-12-02 14:39:24.120341398 +0000 UTC m=+3409.536155249" observedRunningTime="2025-12-02 14:39:24.719863717 +0000 UTC m=+3410.135677658" watchObservedRunningTime="2025-12-02 14:39:24.723248122 +0000 UTC m=+3410.139062013" Dec 02 14:39:30 crc kubenswrapper[4900]: I1202 14:39:30.577206 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:30 crc kubenswrapper[4900]: I1202 14:39:30.578555 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:30 crc kubenswrapper[4900]: I1202 14:39:30.656848 4900 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:30 crc kubenswrapper[4900]: I1202 14:39:30.823538 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:30 crc kubenswrapper[4900]: I1202 14:39:30.906172 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dsmw"] Dec 02 14:39:32 crc kubenswrapper[4900]: I1202 14:39:32.967821 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8dsmw" podUID="17047842-a2d3-4c08-9ef3-8f887a995092" containerName="registry-server" containerID="cri-o://7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef" gracePeriod=2 Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.533361 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.595062 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-utilities\") pod \"17047842-a2d3-4c08-9ef3-8f887a995092\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.595440 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-catalog-content\") pod \"17047842-a2d3-4c08-9ef3-8f887a995092\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.595637 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpflh\" (UniqueName: \"kubernetes.io/projected/17047842-a2d3-4c08-9ef3-8f887a995092-kube-api-access-fpflh\") pod \"17047842-a2d3-4c08-9ef3-8f887a995092\" (UID: \"17047842-a2d3-4c08-9ef3-8f887a995092\") " Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.596086 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-utilities" (OuterVolumeSpecName: "utilities") pod "17047842-a2d3-4c08-9ef3-8f887a995092" (UID: "17047842-a2d3-4c08-9ef3-8f887a995092"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.600671 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17047842-a2d3-4c08-9ef3-8f887a995092-kube-api-access-fpflh" (OuterVolumeSpecName: "kube-api-access-fpflh") pod "17047842-a2d3-4c08-9ef3-8f887a995092" (UID: "17047842-a2d3-4c08-9ef3-8f887a995092"). InnerVolumeSpecName "kube-api-access-fpflh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.614479 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17047842-a2d3-4c08-9ef3-8f887a995092" (UID: "17047842-a2d3-4c08-9ef3-8f887a995092"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.697506 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.697550 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17047842-a2d3-4c08-9ef3-8f887a995092-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.697562 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpflh\" (UniqueName: \"kubernetes.io/projected/17047842-a2d3-4c08-9ef3-8f887a995092-kube-api-access-fpflh\") on node \"crc\" DevicePath \"\"" Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.988401 4900 generic.go:334] "Generic (PLEG): container finished" podID="17047842-a2d3-4c08-9ef3-8f887a995092" containerID="7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef" exitCode=0 Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.988441 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dsmw" event={"ID":"17047842-a2d3-4c08-9ef3-8f887a995092","Type":"ContainerDied","Data":"7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef"} Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.988472 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dsmw" event={"ID":"17047842-a2d3-4c08-9ef3-8f887a995092","Type":"ContainerDied","Data":"7d7779775bd2b5346b940048fa40ed861beb0c825260ede9854bfb790bba459d"} Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.988489 4900 scope.go:117] "RemoveContainer" containerID="7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef" Dec 02 14:39:34 crc kubenswrapper[4900]: I1202 14:39:34.988488 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dsmw" Dec 02 14:39:35 crc kubenswrapper[4900]: I1202 14:39:35.017188 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dsmw"] Dec 02 14:39:35 crc kubenswrapper[4900]: I1202 14:39:35.022324 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dsmw"] Dec 02 14:39:35 crc kubenswrapper[4900]: I1202 14:39:35.029086 4900 scope.go:117] "RemoveContainer" containerID="b9b4967dbc65f4c294ff56476038ac6ebbaaa31f71277b74939012d60836e6a2" Dec 02 14:39:35 crc kubenswrapper[4900]: I1202 14:39:35.058768 4900 scope.go:117] "RemoveContainer" containerID="835dbf12af2e0a2275803e8adbb30f1b4d4f0a76c72bd3cd452a263d7d8d485f" Dec 02 14:39:35 crc kubenswrapper[4900]: I1202 14:39:35.083818 4900 scope.go:117] "RemoveContainer" containerID="7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef" Dec 02 14:39:35 crc kubenswrapper[4900]: E1202 14:39:35.084302 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef\": container with ID starting with 7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef not found: ID does not exist" containerID="7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef" Dec 02 14:39:35 crc kubenswrapper[4900]: I1202 14:39:35.084339 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef"} err="failed to get container status \"7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef\": rpc error: code = NotFound desc = could not find container \"7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef\": container with ID starting with 7bdefe8f3f0da6e0bbf2e07da8bf044d0a7bbb8587cbd1c1ea651dec908e35ef not found: ID does not exist" Dec 02 14:39:35 crc kubenswrapper[4900]: I1202 14:39:35.084365 4900 scope.go:117] "RemoveContainer" containerID="b9b4967dbc65f4c294ff56476038ac6ebbaaa31f71277b74939012d60836e6a2" Dec 02 14:39:35 crc kubenswrapper[4900]: E1202 14:39:35.084705 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9b4967dbc65f4c294ff56476038ac6ebbaaa31f71277b74939012d60836e6a2\": container with ID starting with b9b4967dbc65f4c294ff56476038ac6ebbaaa31f71277b74939012d60836e6a2 not found: ID does not exist" containerID="b9b4967dbc65f4c294ff56476038ac6ebbaaa31f71277b74939012d60836e6a2" Dec 02 14:39:35 crc kubenswrapper[4900]: I1202 14:39:35.084744 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9b4967dbc65f4c294ff56476038ac6ebbaaa31f71277b74939012d60836e6a2"} err="failed to get container status \"b9b4967dbc65f4c294ff56476038ac6ebbaaa31f71277b74939012d60836e6a2\": rpc error: code = NotFound desc = could not find container \"b9b4967dbc65f4c294ff56476038ac6ebbaaa31f71277b74939012d60836e6a2\": container with ID starting with b9b4967dbc65f4c294ff56476038ac6ebbaaa31f71277b74939012d60836e6a2 not found: ID does not exist" Dec 02 14:39:35 crc kubenswrapper[4900]: I1202 14:39:35.084769 4900 scope.go:117] "RemoveContainer" containerID="835dbf12af2e0a2275803e8adbb30f1b4d4f0a76c72bd3cd452a263d7d8d485f" Dec 02 14:39:35 crc kubenswrapper[4900]: E1202 14:39:35.085636 4900 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"835dbf12af2e0a2275803e8adbb30f1b4d4f0a76c72bd3cd452a263d7d8d485f\": container with ID starting with 835dbf12af2e0a2275803e8adbb30f1b4d4f0a76c72bd3cd452a263d7d8d485f not found: ID does not exist" containerID="835dbf12af2e0a2275803e8adbb30f1b4d4f0a76c72bd3cd452a263d7d8d485f" Dec 02 14:39:35 crc kubenswrapper[4900]: I1202 14:39:35.085719 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835dbf12af2e0a2275803e8adbb30f1b4d4f0a76c72bd3cd452a263d7d8d485f"} err="failed to get container status \"835dbf12af2e0a2275803e8adbb30f1b4d4f0a76c72bd3cd452a263d7d8d485f\": rpc error: code = NotFound desc = could not find container \"835dbf12af2e0a2275803e8adbb30f1b4d4f0a76c72bd3cd452a263d7d8d485f\": container with ID starting with 835dbf12af2e0a2275803e8adbb30f1b4d4f0a76c72bd3cd452a263d7d8d485f not found: ID does not exist" Dec 02 14:39:36 crc kubenswrapper[4900]: I1202 14:39:36.928729 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17047842-a2d3-4c08-9ef3-8f887a995092" path="/var/lib/kubelet/pods/17047842-a2d3-4c08-9ef3-8f887a995092/volumes" Dec 02 14:39:44 crc kubenswrapper[4900]: E1202 14:39:44.571870 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17047842_a2d3_4c08_9ef3_8f887a995092.slice\": RecentStats: unable to find data in memory cache]" Dec 02 14:39:45 crc kubenswrapper[4900]: I1202 14:39:45.116982 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:39:45 crc kubenswrapper[4900]: I1202 14:39:45.117253 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:39:54 crc kubenswrapper[4900]: E1202 14:39:54.772294 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17047842_a2d3_4c08_9ef3_8f887a995092.slice\": RecentStats: unable to find data in memory cache]" Dec 02 14:40:05 crc kubenswrapper[4900]: E1202 14:40:05.009843 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17047842_a2d3_4c08_9ef3_8f887a995092.slice\": RecentStats: unable to find data in memory cache]" Dec 02 14:40:15 crc kubenswrapper[4900]: I1202 14:40:15.116127 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:40:15 crc kubenswrapper[4900]: I1202 14:40:15.116612 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:40:15 crc kubenswrapper[4900]: I1202 14:40:15.116696 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 14:40:15 crc kubenswrapper[4900]: I1202 14:40:15.117474 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a882a1b3cff6db048c62ae51521a947967772832f7a499093c51e175f53e3047"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:40:15 crc kubenswrapper[4900]: I1202 14:40:15.117532 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://a882a1b3cff6db048c62ae51521a947967772832f7a499093c51e175f53e3047" gracePeriod=600 Dec 02 14:40:15 crc kubenswrapper[4900]: E1202 14:40:15.234493 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17047842_a2d3_4c08_9ef3_8f887a995092.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c8f7b18_f260_4beb_b4ff_0af7e505c7d1.slice/crio-a882a1b3cff6db048c62ae51521a947967772832f7a499093c51e175f53e3047.scope\": RecentStats: unable to find data in memory cache]" Dec 02 14:40:15 crc kubenswrapper[4900]: I1202 14:40:15.367083 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="a882a1b3cff6db048c62ae51521a947967772832f7a499093c51e175f53e3047" exitCode=0 Dec 02 14:40:15 crc kubenswrapper[4900]: I1202 14:40:15.367124 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"a882a1b3cff6db048c62ae51521a947967772832f7a499093c51e175f53e3047"} Dec 02 14:40:15 crc kubenswrapper[4900]: I1202 14:40:15.367153 4900 scope.go:117] "RemoveContainer" containerID="27d0763273d3fb197e875226754730a5c48a3e70371365fe76e409a7cc14a499" Dec 02 14:40:16 crc kubenswrapper[4900]: I1202 14:40:16.380231 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421"} Dec 02 14:40:25 crc kubenswrapper[4900]: E1202 14:40:25.435992 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17047842_a2d3_4c08_9ef3_8f887a995092.slice\": RecentStats: unable to find data in memory cache]" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.432857 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g27zj"] Dec 02 14:41:48 crc kubenswrapper[4900]: E1202 14:41:48.435728 4900 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="17047842-a2d3-4c08-9ef3-8f887a995092" containerName="extract-utilities" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.435906 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="17047842-a2d3-4c08-9ef3-8f887a995092" containerName="extract-utilities" Dec 02 14:41:48 crc kubenswrapper[4900]: E1202 14:41:48.436071 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17047842-a2d3-4c08-9ef3-8f887a995092" containerName="registry-server" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.436197 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="17047842-a2d3-4c08-9ef3-8f887a995092" containerName="registry-server" Dec 02 14:41:48 crc kubenswrapper[4900]: E1202 14:41:48.436333 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17047842-a2d3-4c08-9ef3-8f887a995092" containerName="extract-content" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.436560 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="17047842-a2d3-4c08-9ef3-8f887a995092" containerName="extract-content" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.436994 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="17047842-a2d3-4c08-9ef3-8f887a995092" containerName="registry-server" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.439211 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.463070 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g27zj"] Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.513096 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-catalog-content\") pod \"certified-operators-g27zj\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.513154 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-utilities\") pod \"certified-operators-g27zj\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.513220 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js9ct\" (UniqueName: \"kubernetes.io/projected/a9e35c49-02e1-4af8-8c03-b4659d530835-kube-api-access-js9ct\") pod \"certified-operators-g27zj\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.615317 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-catalog-content\") pod \"certified-operators-g27zj\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.615364 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-utilities\") pod 
\"certified-operators-g27zj\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.615408 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js9ct\" (UniqueName: \"kubernetes.io/projected/a9e35c49-02e1-4af8-8c03-b4659d530835-kube-api-access-js9ct\") pod \"certified-operators-g27zj\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.615908 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-catalog-content\") pod \"certified-operators-g27zj\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.615982 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-utilities\") pod \"certified-operators-g27zj\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.647280 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js9ct\" (UniqueName: \"kubernetes.io/projected/a9e35c49-02e1-4af8-8c03-b4659d530835-kube-api-access-js9ct\") pod \"certified-operators-g27zj\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:48 crc kubenswrapper[4900]: I1202 14:41:48.765397 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:49 crc kubenswrapper[4900]: I1202 14:41:49.100897 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g27zj"] Dec 02 14:41:49 crc kubenswrapper[4900]: I1202 14:41:49.255818 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g27zj" event={"ID":"a9e35c49-02e1-4af8-8c03-b4659d530835","Type":"ContainerStarted","Data":"d35327422740787883e7cb81a2527c5e92af86688d6cbb6d30eb43aab84a6ad5"} Dec 02 14:41:50 crc kubenswrapper[4900]: I1202 14:41:50.268953 4900 generic.go:334] "Generic (PLEG): container finished" podID="a9e35c49-02e1-4af8-8c03-b4659d530835" containerID="b41c06a652ff5a45cc8cc8d7cd8ae7a4f6cfa3d735a42bb3262707b0c4cbabab" exitCode=0 Dec 02 14:41:50 crc kubenswrapper[4900]: I1202 14:41:50.269016 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g27zj" event={"ID":"a9e35c49-02e1-4af8-8c03-b4659d530835","Type":"ContainerDied","Data":"b41c06a652ff5a45cc8cc8d7cd8ae7a4f6cfa3d735a42bb3262707b0c4cbabab"} Dec 02 14:41:52 crc kubenswrapper[4900]: I1202 14:41:52.288913 4900 generic.go:334] "Generic (PLEG): container finished" podID="a9e35c49-02e1-4af8-8c03-b4659d530835" containerID="8124cfa5e1c094f588e432bf9cd3de23cce3fa54727c1dc3bb1dd1941cca370e" exitCode=0 Dec 02 14:41:52 crc kubenswrapper[4900]: I1202 14:41:52.288974 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g27zj" event={"ID":"a9e35c49-02e1-4af8-8c03-b4659d530835","Type":"ContainerDied","Data":"8124cfa5e1c094f588e432bf9cd3de23cce3fa54727c1dc3bb1dd1941cca370e"} Dec 02 14:41:53 crc kubenswrapper[4900]: I1202 14:41:53.298122 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g27zj" event={"ID":"a9e35c49-02e1-4af8-8c03-b4659d530835","Type":"ContainerStarted","Data":"31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226"} Dec 02 14:41:53 crc kubenswrapper[4900]: I1202 14:41:53.321706 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g27zj" podStartSLOduration=2.803826328 podStartE2EDuration="5.321682382s" podCreationTimestamp="2025-12-02 14:41:48 +0000 UTC" firstStartedPulling="2025-12-02 14:41:50.271750171 +0000 UTC m=+3555.687564032" lastFinishedPulling="2025-12-02 14:41:52.789606235 +0000 UTC m=+3558.205420086" observedRunningTime="2025-12-02 14:41:53.318896814 +0000 UTC m=+3558.734710665" watchObservedRunningTime="2025-12-02 14:41:53.321682382 +0000 UTC m=+3558.737496253" Dec 02 14:41:58 crc kubenswrapper[4900]: I1202 14:41:58.765569 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:58 crc kubenswrapper[4900]: I1202 14:41:58.766468 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:58 crc kubenswrapper[4900]: I1202 14:41:58.825995 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:59 crc kubenswrapper[4900]: I1202 14:41:59.417331 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:41:59 crc kubenswrapper[4900]: I1202 14:41:59.477442 4900 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-g27zj"] Dec 02 14:42:01 crc kubenswrapper[4900]: I1202 14:42:01.376825 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g27zj" podUID="a9e35c49-02e1-4af8-8c03-b4659d530835" containerName="registry-server" containerID="cri-o://31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226" gracePeriod=2 Dec 02 14:42:01 crc kubenswrapper[4900]: I1202 14:42:01.864602 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:42:01 crc kubenswrapper[4900]: I1202 14:42:01.953215 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-catalog-content\") pod \"a9e35c49-02e1-4af8-8c03-b4659d530835\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " Dec 02 14:42:01 crc kubenswrapper[4900]: I1202 14:42:01.953310 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js9ct\" (UniqueName: \"kubernetes.io/projected/a9e35c49-02e1-4af8-8c03-b4659d530835-kube-api-access-js9ct\") pod \"a9e35c49-02e1-4af8-8c03-b4659d530835\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " Dec 02 14:42:01 crc kubenswrapper[4900]: I1202 14:42:01.953366 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-utilities\") pod \"a9e35c49-02e1-4af8-8c03-b4659d530835\" (UID: \"a9e35c49-02e1-4af8-8c03-b4659d530835\") " Dec 02 14:42:01 crc kubenswrapper[4900]: I1202 14:42:01.955028 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-utilities" (OuterVolumeSpecName: "utilities") pod "a9e35c49-02e1-4af8-8c03-b4659d530835" (UID: "a9e35c49-02e1-4af8-8c03-b4659d530835"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:42:01 crc kubenswrapper[4900]: I1202 14:42:01.959550 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e35c49-02e1-4af8-8c03-b4659d530835-kube-api-access-js9ct" (OuterVolumeSpecName: "kube-api-access-js9ct") pod "a9e35c49-02e1-4af8-8c03-b4659d530835" (UID: "a9e35c49-02e1-4af8-8c03-b4659d530835"). InnerVolumeSpecName "kube-api-access-js9ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.037800 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9e35c49-02e1-4af8-8c03-b4659d530835" (UID: "a9e35c49-02e1-4af8-8c03-b4659d530835"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.055113 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.055143 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9e35c49-02e1-4af8-8c03-b4659d530835-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.055158 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js9ct\" (UniqueName: \"kubernetes.io/projected/a9e35c49-02e1-4af8-8c03-b4659d530835-kube-api-access-js9ct\") on node \"crc\" DevicePath \"\"" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.393027 4900 generic.go:334] "Generic (PLEG): container finished" podID="a9e35c49-02e1-4af8-8c03-b4659d530835" containerID="31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226" exitCode=0 Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.393101 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g27zj" event={"ID":"a9e35c49-02e1-4af8-8c03-b4659d530835","Type":"ContainerDied","Data":"31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226"} Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.393136 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g27zj" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.393227 4900 scope.go:117] "RemoveContainer" containerID="31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.393202 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g27zj" event={"ID":"a9e35c49-02e1-4af8-8c03-b4659d530835","Type":"ContainerDied","Data":"d35327422740787883e7cb81a2527c5e92af86688d6cbb6d30eb43aab84a6ad5"} Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.429833 4900 scope.go:117] "RemoveContainer" containerID="8124cfa5e1c094f588e432bf9cd3de23cce3fa54727c1dc3bb1dd1941cca370e" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.461584 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g27zj"] Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.468252 4900 scope.go:117] "RemoveContainer" containerID="b41c06a652ff5a45cc8cc8d7cd8ae7a4f6cfa3d735a42bb3262707b0c4cbabab" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.475197 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g27zj"] Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.516969 4900 scope.go:117] "RemoveContainer" containerID="31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226" Dec 02 14:42:02 crc kubenswrapper[4900]: E1202 14:42:02.517683 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226\": container with ID starting with 31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226 not found: ID does not exist" containerID="31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.517758 
4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226"} err="failed to get container status \"31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226\": rpc error: code = NotFound desc = could not find container \"31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226\": container with ID starting with 31679a6ee3b73a7aff1a7985fc14bda8f8862f6c4c0dd38aab58fb77e58f0226 not found: ID does not exist" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.517808 4900 scope.go:117] "RemoveContainer" containerID="8124cfa5e1c094f588e432bf9cd3de23cce3fa54727c1dc3bb1dd1941cca370e" Dec 02 14:42:02 crc kubenswrapper[4900]: E1202 14:42:02.518571 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8124cfa5e1c094f588e432bf9cd3de23cce3fa54727c1dc3bb1dd1941cca370e\": container with ID starting with 8124cfa5e1c094f588e432bf9cd3de23cce3fa54727c1dc3bb1dd1941cca370e not found: ID does not exist" containerID="8124cfa5e1c094f588e432bf9cd3de23cce3fa54727c1dc3bb1dd1941cca370e" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.518729 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8124cfa5e1c094f588e432bf9cd3de23cce3fa54727c1dc3bb1dd1941cca370e"} err="failed to get container status \"8124cfa5e1c094f588e432bf9cd3de23cce3fa54727c1dc3bb1dd1941cca370e\": rpc error: code = NotFound desc = could not find container \"8124cfa5e1c094f588e432bf9cd3de23cce3fa54727c1dc3bb1dd1941cca370e\": container with ID starting with 8124cfa5e1c094f588e432bf9cd3de23cce3fa54727c1dc3bb1dd1941cca370e not found: ID does not exist" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.518773 4900 scope.go:117] "RemoveContainer" containerID="b41c06a652ff5a45cc8cc8d7cd8ae7a4f6cfa3d735a42bb3262707b0c4cbabab" Dec 02 14:42:02 crc kubenswrapper[4900]: E1202 14:42:02.519353 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41c06a652ff5a45cc8cc8d7cd8ae7a4f6cfa3d735a42bb3262707b0c4cbabab\": container with ID starting with b41c06a652ff5a45cc8cc8d7cd8ae7a4f6cfa3d735a42bb3262707b0c4cbabab not found: ID does not exist" containerID="b41c06a652ff5a45cc8cc8d7cd8ae7a4f6cfa3d735a42bb3262707b0c4cbabab" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.519430 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41c06a652ff5a45cc8cc8d7cd8ae7a4f6cfa3d735a42bb3262707b0c4cbabab"} err="failed to get container status \"b41c06a652ff5a45cc8cc8d7cd8ae7a4f6cfa3d735a42bb3262707b0c4cbabab\": rpc error: code = NotFound desc = could not find container \"b41c06a652ff5a45cc8cc8d7cd8ae7a4f6cfa3d735a42bb3262707b0c4cbabab\": container with ID starting with b41c06a652ff5a45cc8cc8d7cd8ae7a4f6cfa3d735a42bb3262707b0c4cbabab not found: ID does not exist" Dec 02 14:42:02 crc kubenswrapper[4900]: I1202 14:42:02.926407 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e35c49-02e1-4af8-8c03-b4659d530835" path="/var/lib/kubelet/pods/a9e35c49-02e1-4af8-8c03-b4659d530835/volumes" Dec 02 14:42:15 crc kubenswrapper[4900]: I1202 14:42:15.117127 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:42:15 crc kubenswrapper[4900]: I1202 14:42:15.117669 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:42:45 crc kubenswrapper[4900]: I1202 14:42:45.117027 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:42:45 crc kubenswrapper[4900]: I1202 14:42:45.117998 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:43:15 crc kubenswrapper[4900]: I1202 14:43:15.116740 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:43:15 crc kubenswrapper[4900]: I1202 14:43:15.117451 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:43:15 crc kubenswrapper[4900]: I1202 14:43:15.117514 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 14:43:15 crc kubenswrapper[4900]: I1202 14:43:15.118566 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:43:15 crc kubenswrapper[4900]: I1202 14:43:15.118714 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" gracePeriod=600 Dec 02 14:43:15 crc kubenswrapper[4900]: E1202 14:43:15.255746 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:43:16 crc kubenswrapper[4900]: I1202 14:43:16.144791 4900 
generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" exitCode=0 Dec 02 14:43:16 crc kubenswrapper[4900]: I1202 14:43:16.144899 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421"} Dec 02 14:43:16 crc kubenswrapper[4900]: I1202 14:43:16.145198 4900 scope.go:117] "RemoveContainer" containerID="a882a1b3cff6db048c62ae51521a947967772832f7a499093c51e175f53e3047" Dec 02 14:43:16 crc kubenswrapper[4900]: I1202 14:43:16.145810 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:43:16 crc kubenswrapper[4900]: E1202 14:43:16.146249 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:43:29 crc kubenswrapper[4900]: I1202 14:43:29.909954 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:43:29 crc kubenswrapper[4900]: E1202 14:43:29.911383 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:43:42 crc kubenswrapper[4900]: I1202 14:43:42.909900 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:43:42 crc kubenswrapper[4900]: E1202 14:43:42.910975 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:43:57 crc kubenswrapper[4900]: I1202 14:43:57.910340 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:43:57 crc kubenswrapper[4900]: E1202 14:43:57.911002 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:44:09 crc kubenswrapper[4900]: I1202 14:44:09.910455 4900 scope.go:117] "RemoveContainer" 
containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:44:09 crc kubenswrapper[4900]: E1202 14:44:09.911560 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:44:24 crc kubenswrapper[4900]: I1202 14:44:24.920636 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:44:24 crc kubenswrapper[4900]: E1202 14:44:24.921835 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:44:39 crc kubenswrapper[4900]: I1202 14:44:39.910022 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:44:39 crc kubenswrapper[4900]: E1202 14:44:39.910724 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:44:50 crc kubenswrapper[4900]: I1202 14:44:50.910535 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:44:50 crc kubenswrapper[4900]: E1202 14:44:50.911444 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.171060 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln"] Dec 02 14:45:00 crc kubenswrapper[4900]: E1202 14:45:00.172303 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e35c49-02e1-4af8-8c03-b4659d530835" containerName="extract-utilities" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.172336 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e35c49-02e1-4af8-8c03-b4659d530835" containerName="extract-utilities" Dec 02 14:45:00 crc kubenswrapper[4900]: E1202 14:45:00.172381 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e35c49-02e1-4af8-8c03-b4659d530835" containerName="registry-server" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.172397 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e35c49-02e1-4af8-8c03-b4659d530835" 
containerName="registry-server" Dec 02 14:45:00 crc kubenswrapper[4900]: E1202 14:45:00.172425 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e35c49-02e1-4af8-8c03-b4659d530835" containerName="extract-content" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.172438 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e35c49-02e1-4af8-8c03-b4659d530835" containerName="extract-content" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.172797 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e35c49-02e1-4af8-8c03-b4659d530835" containerName="registry-server" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.173885 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.176345 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.177221 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.181629 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln"] Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.251373 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-secret-volume\") pod \"collect-profiles-29411445-rttln\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.251756 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbchk\" (UniqueName: \"kubernetes.io/projected/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-kube-api-access-rbchk\") pod \"collect-profiles-29411445-rttln\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.251783 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-config-volume\") pod \"collect-profiles-29411445-rttln\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.352812 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbchk\" (UniqueName: \"kubernetes.io/projected/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-kube-api-access-rbchk\") pod \"collect-profiles-29411445-rttln\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.352880 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-config-volume\") pod \"collect-profiles-29411445-rttln\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.352987 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-secret-volume\") pod \"collect-profiles-29411445-rttln\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.353802 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-config-volume\") pod \"collect-profiles-29411445-rttln\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.358532 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-secret-volume\") pod \"collect-profiles-29411445-rttln\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.372575 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbchk\" (UniqueName: \"kubernetes.io/projected/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-kube-api-access-rbchk\") pod \"collect-profiles-29411445-rttln\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.503361 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:00 crc kubenswrapper[4900]: I1202 14:45:00.990568 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln"] Dec 02 14:45:01 crc kubenswrapper[4900]: I1202 14:45:01.115720 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" event={"ID":"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed","Type":"ContainerStarted","Data":"76063037c853800874777a450748f2354487df15adcad6e1e159c26ffc993c76"} Dec 02 14:45:02 crc kubenswrapper[4900]: I1202 14:45:02.126907 4900 generic.go:334] "Generic (PLEG): container finished" podID="ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed" containerID="6b22e05c3fa923542344a5c19b5ff3cb1ce7d10f6d210af5ade8a393d93415e4" exitCode=0 Dec 02 14:45:02 crc kubenswrapper[4900]: I1202 14:45:02.127270 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" event={"ID":"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed","Type":"ContainerDied","Data":"6b22e05c3fa923542344a5c19b5ff3cb1ce7d10f6d210af5ade8a393d93415e4"} Dec 02 14:45:03 crc kubenswrapper[4900]: I1202 14:45:03.490412 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:03 crc kubenswrapper[4900]: I1202 14:45:03.596222 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-config-volume\") pod \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " Dec 02 14:45:03 crc kubenswrapper[4900]: I1202 14:45:03.596395 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-secret-volume\") pod \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " Dec 02 14:45:03 crc kubenswrapper[4900]: I1202 14:45:03.596440 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbchk\" (UniqueName: \"kubernetes.io/projected/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-kube-api-access-rbchk\") pod \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\" (UID: \"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed\") " Dec 02 14:45:03 crc kubenswrapper[4900]: I1202 14:45:03.596999 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-config-volume" (OuterVolumeSpecName: "config-volume") pod "ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed" (UID: "ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:45:03 crc kubenswrapper[4900]: I1202 14:45:03.602964 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-kube-api-access-rbchk" (OuterVolumeSpecName: "kube-api-access-rbchk") pod "ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed" (UID: "ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed"). InnerVolumeSpecName "kube-api-access-rbchk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:45:03 crc kubenswrapper[4900]: I1202 14:45:03.603140 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed" (UID: "ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:45:03 crc kubenswrapper[4900]: I1202 14:45:03.697601 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbchk\" (UniqueName: \"kubernetes.io/projected/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-kube-api-access-rbchk\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:03 crc kubenswrapper[4900]: I1202 14:45:03.697664 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:03 crc kubenswrapper[4900]: I1202 14:45:03.697682 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:45:04 crc kubenswrapper[4900]: I1202 14:45:04.146041 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" event={"ID":"ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed","Type":"ContainerDied","Data":"76063037c853800874777a450748f2354487df15adcad6e1e159c26ffc993c76"} Dec 02 14:45:04 crc kubenswrapper[4900]: I1202 14:45:04.146083 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76063037c853800874777a450748f2354487df15adcad6e1e159c26ffc993c76" Dec 02 14:45:04 crc kubenswrapper[4900]: I1202 14:45:04.146144 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln" Dec 02 14:45:04 crc kubenswrapper[4900]: I1202 14:45:04.569675 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j"] Dec 02 14:45:04 crc kubenswrapper[4900]: I1202 14:45:04.575032 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411400-ghj2j"] Dec 02 14:45:04 crc kubenswrapper[4900]: I1202 14:45:04.927258 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0e9f79-e00b-4f50-9dac-35ba58716c2a" path="/var/lib/kubelet/pods/7b0e9f79-e00b-4f50-9dac-35ba58716c2a/volumes" Dec 02 14:45:05 crc kubenswrapper[4900]: I1202 14:45:05.910481 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:45:05 crc kubenswrapper[4900]: E1202 14:45:05.910712 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:45:20 crc kubenswrapper[4900]: I1202 14:45:20.910856 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:45:20 crc kubenswrapper[4900]: E1202 14:45:20.912047 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:45:32 crc kubenswrapper[4900]: I1202 14:45:32.910105 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:45:32 crc kubenswrapper[4900]: E1202 14:45:32.910730 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:45:45 crc kubenswrapper[4900]: I1202 14:45:45.909850 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:45:45 crc kubenswrapper[4900]: E1202 14:45:45.910619 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:45:56 crc kubenswrapper[4900]: I1202 14:45:56.910420 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:45:56 crc kubenswrapper[4900]: E1202 14:45:56.911150 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:45:59 crc kubenswrapper[4900]: I1202 14:45:59.935973 4900 scope.go:117] "RemoveContainer" containerID="de128c0018ee69f7637c11ccbe47246a83da49088a4c5ba17152d641b243a8a6" Dec 02 14:46:09 crc kubenswrapper[4900]: I1202 14:46:09.910840 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:46:09 crc kubenswrapper[4900]: E1202 14:46:09.912441 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:46:21 crc kubenswrapper[4900]: I1202 14:46:21.911022 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:46:21 crc kubenswrapper[4900]: E1202 14:46:21.912705 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:46:35 crc kubenswrapper[4900]: I1202 14:46:35.909361 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:46:35 crc kubenswrapper[4900]: E1202 14:46:35.909959 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:46:47 crc kubenswrapper[4900]: I1202 14:46:47.910960 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:46:47 crc kubenswrapper[4900]: E1202 14:46:47.912220 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.011098 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jsrjk"] Dec 02 14:46:54 crc kubenswrapper[4900]: E1202 14:46:54.012233 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed" containerName="collect-profiles" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.012256 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed" containerName="collect-profiles" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.012579 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed" containerName="collect-profiles" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.014546 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.022360 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jsrjk"] Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.199697 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-utilities\") pod \"community-operators-jsrjk\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.199738 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-catalog-content\") pod \"community-operators-jsrjk\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.199793 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77dzw\" (UniqueName: \"kubernetes.io/projected/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-kube-api-access-77dzw\") pod \"community-operators-jsrjk\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.301091 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-utilities\") pod \"community-operators-jsrjk\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.302197 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-catalog-content\") pod \"community-operators-jsrjk\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.301547 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-utilities\") pod \"community-operators-jsrjk\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.302426 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77dzw\" (UniqueName: \"kubernetes.io/projected/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-kube-api-access-77dzw\") pod \"community-operators-jsrjk\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.302512 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-catalog-content\") pod \"community-operators-jsrjk\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.333719 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-77dzw\" (UniqueName: \"kubernetes.io/projected/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-kube-api-access-77dzw\") pod \"community-operators-jsrjk\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.364194 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:46:54 crc kubenswrapper[4900]: I1202 14:46:54.860164 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jsrjk"] Dec 02 14:46:55 crc kubenswrapper[4900]: I1202 14:46:55.126067 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jsrjk" event={"ID":"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b","Type":"ContainerStarted","Data":"2e7ed41bc369d90aa740179a877e291797a1efed335dd952679c39f93a036b5f"} Dec 02 14:46:56 crc kubenswrapper[4900]: I1202 14:46:56.135355 4900 generic.go:334] "Generic (PLEG): container finished" podID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" containerID="5ca71db0a802e4fb6878dd912db0fbad8a3e1cbbb1e9732da23f163c2e353f56" exitCode=0 Dec 02 14:46:56 crc kubenswrapper[4900]: I1202 14:46:56.135397 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jsrjk" event={"ID":"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b","Type":"ContainerDied","Data":"5ca71db0a802e4fb6878dd912db0fbad8a3e1cbbb1e9732da23f163c2e353f56"} Dec 02 14:46:56 crc kubenswrapper[4900]: I1202 14:46:56.136879 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:46:58 crc kubenswrapper[4900]: I1202 14:46:58.149264 4900 generic.go:334] "Generic (PLEG): container finished" podID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" containerID="3f4d4244bae4a2bb5c85a2520d79cc2310d7e2eb9977da5b57ffe0163b93d4bf" exitCode=0 Dec 02 14:46:58 crc kubenswrapper[4900]: I1202 14:46:58.149324 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jsrjk" event={"ID":"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b","Type":"ContainerDied","Data":"3f4d4244bae4a2bb5c85a2520d79cc2310d7e2eb9977da5b57ffe0163b93d4bf"} Dec 02 14:46:59 crc kubenswrapper[4900]: I1202 14:46:59.157247 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jsrjk" event={"ID":"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b","Type":"ContainerStarted","Data":"5f379c5a95eefe5ecf543912752081439a8dd716542cb0196fbd58722fdeed7e"} Dec 02 14:46:59 crc kubenswrapper[4900]: I1202 14:46:59.184026 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jsrjk" podStartSLOduration=3.683120502 podStartE2EDuration="6.183999923s" podCreationTimestamp="2025-12-02 14:46:53 +0000 UTC" firstStartedPulling="2025-12-02 14:46:56.136599532 +0000 UTC m=+3861.552413393" lastFinishedPulling="2025-12-02 14:46:58.637478923 +0000 UTC m=+3864.053292814" observedRunningTime="2025-12-02 14:46:59.178434267 +0000 UTC m=+3864.594248128" watchObservedRunningTime="2025-12-02 14:46:59.183999923 +0000 UTC m=+3864.599813794" Dec 02 14:47:02 crc kubenswrapper[4900]: I1202 14:47:02.911272 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:47:02 crc kubenswrapper[4900]: E1202 14:47:02.912380 4900 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:47:04 crc kubenswrapper[4900]: I1202 14:47:04.365147 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:47:04 crc kubenswrapper[4900]: I1202 14:47:04.365541 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:47:04 crc kubenswrapper[4900]: I1202 14:47:04.421442 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:47:05 crc kubenswrapper[4900]: I1202 14:47:05.253805 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:47:05 crc kubenswrapper[4900]: I1202 14:47:05.300816 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jsrjk"] Dec 02 14:47:07 crc kubenswrapper[4900]: I1202 14:47:07.218430 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jsrjk" podUID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" containerName="registry-server" containerID="cri-o://5f379c5a95eefe5ecf543912752081439a8dd716542cb0196fbd58722fdeed7e" gracePeriod=2 Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.229146 4900 generic.go:334] "Generic (PLEG): container finished" podID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" containerID="5f379c5a95eefe5ecf543912752081439a8dd716542cb0196fbd58722fdeed7e" exitCode=0 Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.229273 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jsrjk" event={"ID":"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b","Type":"ContainerDied","Data":"5f379c5a95eefe5ecf543912752081439a8dd716542cb0196fbd58722fdeed7e"} Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.684740 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.733183 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-utilities\") pod \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.733269 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-catalog-content\") pod \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.733328 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77dzw\" (UniqueName: \"kubernetes.io/projected/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-kube-api-access-77dzw\") pod \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\" (UID: \"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b\") " Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.735503 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-utilities" (OuterVolumeSpecName: "utilities") pod "f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" (UID: "f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.739878 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-kube-api-access-77dzw" (OuterVolumeSpecName: "kube-api-access-77dzw") pod "f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" (UID: "f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b"). InnerVolumeSpecName "kube-api-access-77dzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.793119 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" (UID: "f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.835000 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77dzw\" (UniqueName: \"kubernetes.io/projected/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-kube-api-access-77dzw\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.835047 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:08 crc kubenswrapper[4900]: I1202 14:47:08.835063 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:47:09 crc kubenswrapper[4900]: I1202 14:47:09.238455 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jsrjk" event={"ID":"f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b","Type":"ContainerDied","Data":"2e7ed41bc369d90aa740179a877e291797a1efed335dd952679c39f93a036b5f"} Dec 02 14:47:09 crc kubenswrapper[4900]: I1202 14:47:09.238523 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jsrjk" Dec 02 14:47:09 crc kubenswrapper[4900]: I1202 14:47:09.238787 4900 scope.go:117] "RemoveContainer" containerID="5f379c5a95eefe5ecf543912752081439a8dd716542cb0196fbd58722fdeed7e" Dec 02 14:47:09 crc kubenswrapper[4900]: I1202 14:47:09.262294 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jsrjk"] Dec 02 14:47:09 crc kubenswrapper[4900]: I1202 14:47:09.264488 4900 scope.go:117] "RemoveContainer" containerID="3f4d4244bae4a2bb5c85a2520d79cc2310d7e2eb9977da5b57ffe0163b93d4bf" Dec 02 14:47:09 crc kubenswrapper[4900]: I1202 14:47:09.279366 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jsrjk"] Dec 02 14:47:09 crc kubenswrapper[4900]: I1202 14:47:09.281769 4900 scope.go:117] "RemoveContainer" containerID="5ca71db0a802e4fb6878dd912db0fbad8a3e1cbbb1e9732da23f163c2e353f56" Dec 02 14:47:10 crc kubenswrapper[4900]: I1202 14:47:10.917803 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" path="/var/lib/kubelet/pods/f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b/volumes" Dec 02 14:47:16 crc kubenswrapper[4900]: I1202 14:47:16.909456 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:47:16 crc kubenswrapper[4900]: E1202 14:47:16.910183 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:47:29 crc kubenswrapper[4900]: I1202 14:47:29.910159 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:47:29 crc kubenswrapper[4900]: E1202 14:47:29.911266 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:47:43 crc kubenswrapper[4900]: I1202 14:47:43.912811 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:47:43 crc kubenswrapper[4900]: E1202 14:47:43.914078 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:47:54 crc kubenswrapper[4900]: I1202 14:47:54.918624 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:47:54 crc kubenswrapper[4900]: E1202 14:47:54.920088 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:48:09 crc kubenswrapper[4900]: I1202 14:48:09.910057 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:48:09 crc kubenswrapper[4900]: E1202 14:48:09.910954 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.669457 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kh7zv"] Dec 02 14:48:10 crc kubenswrapper[4900]: E1202 14:48:10.670083 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" containerName="registry-server" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.670103 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" containerName="registry-server" Dec 02 14:48:10 crc kubenswrapper[4900]: E1202 14:48:10.670124 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" containerName="extract-utilities" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.670133 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" containerName="extract-utilities" Dec 02 14:48:10 crc kubenswrapper[4900]: E1202 14:48:10.670158 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" containerName="extract-content" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 
14:48:10.670167 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" containerName="extract-content" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.670353 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2676711-cc5d-4e6e-92d5-1bd7e4f5d55b" containerName="registry-server" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.671590 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.676549 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kh7zv"] Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.863851 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-catalog-content\") pod \"redhat-operators-kh7zv\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.863903 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhs6z\" (UniqueName: \"kubernetes.io/projected/30a6f9e2-a099-4855-bc10-dee02a32a4f4-kube-api-access-xhs6z\") pod \"redhat-operators-kh7zv\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.864162 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-utilities\") pod \"redhat-operators-kh7zv\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.965818 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-catalog-content\") pod \"redhat-operators-kh7zv\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.965896 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhs6z\" (UniqueName: \"kubernetes.io/projected/30a6f9e2-a099-4855-bc10-dee02a32a4f4-kube-api-access-xhs6z\") pod \"redhat-operators-kh7zv\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.965956 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-utilities\") pod \"redhat-operators-kh7zv\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.966465 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-utilities\") pod \"redhat-operators-kh7zv\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.966467 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-catalog-content\") pod \"redhat-operators-kh7zv\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:10 crc kubenswrapper[4900]: I1202 14:48:10.998851 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhs6z\" (UniqueName: \"kubernetes.io/projected/30a6f9e2-a099-4855-bc10-dee02a32a4f4-kube-api-access-xhs6z\") pod \"redhat-operators-kh7zv\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:11 crc kubenswrapper[4900]: I1202 14:48:11.007469 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:11 crc kubenswrapper[4900]: I1202 14:48:11.499862 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kh7zv"] Dec 02 14:48:11 crc kubenswrapper[4900]: I1202 14:48:11.785194 4900 generic.go:334] "Generic (PLEG): container finished" podID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" containerID="e3b7791de81ae46f98c858533c635204a4f6272efb9bc4ca8ac2a8b219f0266b" exitCode=0 Dec 02 14:48:11 crc kubenswrapper[4900]: I1202 14:48:11.785275 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh7zv" event={"ID":"30a6f9e2-a099-4855-bc10-dee02a32a4f4","Type":"ContainerDied","Data":"e3b7791de81ae46f98c858533c635204a4f6272efb9bc4ca8ac2a8b219f0266b"} Dec 02 14:48:11 crc kubenswrapper[4900]: I1202 14:48:11.785524 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh7zv" event={"ID":"30a6f9e2-a099-4855-bc10-dee02a32a4f4","Type":"ContainerStarted","Data":"4695a4f132190dc3216d273a7cbc45750e0cab866d2856f9f912f7e29e03a872"} Dec 02 14:48:13 crc kubenswrapper[4900]: I1202 14:48:13.802691 4900 generic.go:334] "Generic (PLEG): container finished" podID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" containerID="858fcb411c80eeb753d0b97daa9dd2cb1390e9b27f3e26cf1ef8b6aa5e0610e3" exitCode=0 Dec 02 14:48:13 crc kubenswrapper[4900]: I1202 14:48:13.802744 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh7zv" event={"ID":"30a6f9e2-a099-4855-bc10-dee02a32a4f4","Type":"ContainerDied","Data":"858fcb411c80eeb753d0b97daa9dd2cb1390e9b27f3e26cf1ef8b6aa5e0610e3"} Dec 02 14:48:14 crc kubenswrapper[4900]: I1202 14:48:14.816280 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh7zv" event={"ID":"30a6f9e2-a099-4855-bc10-dee02a32a4f4","Type":"ContainerStarted","Data":"dcc78bbec027526dec2fba0b09e57c49435862ca90f848d07d4626cd7f9b6b8b"} Dec 02 14:48:14 crc kubenswrapper[4900]: I1202 14:48:14.844907 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kh7zv" podStartSLOduration=2.370874001 podStartE2EDuration="4.84489022s" podCreationTimestamp="2025-12-02 14:48:10 +0000 UTC" firstStartedPulling="2025-12-02 14:48:11.78749534 +0000 UTC m=+3937.203309191" lastFinishedPulling="2025-12-02 14:48:14.261511519 +0000 UTC m=+3939.677325410" observedRunningTime="2025-12-02 14:48:14.840367183 +0000 UTC m=+3940.256181044" watchObservedRunningTime="2025-12-02 14:48:14.84489022 +0000 UTC m=+3940.260704071" Dec 02 14:48:21 crc kubenswrapper[4900]: I1202 14:48:21.008296 
4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:21 crc kubenswrapper[4900]: I1202 14:48:21.008863 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:21 crc kubenswrapper[4900]: I1202 14:48:21.056916 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:21 crc kubenswrapper[4900]: I1202 14:48:21.909506 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:48:21 crc kubenswrapper[4900]: I1202 14:48:21.932323 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:21 crc kubenswrapper[4900]: I1202 14:48:21.990914 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kh7zv"] Dec 02 14:48:23 crc kubenswrapper[4900]: I1202 14:48:23.900618 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"eb486ba9ab334ae7b29a776d5bd87177a06c3e28f4b3e56e6eb474a7554432d2"} Dec 02 14:48:23 crc kubenswrapper[4900]: I1202 14:48:23.900824 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kh7zv" podUID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" containerName="registry-server" containerID="cri-o://dcc78bbec027526dec2fba0b09e57c49435862ca90f848d07d4626cd7f9b6b8b" gracePeriod=2 Dec 02 14:48:24 crc kubenswrapper[4900]: I1202 14:48:24.911402 4900 generic.go:334] "Generic (PLEG): container finished" podID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" containerID="dcc78bbec027526dec2fba0b09e57c49435862ca90f848d07d4626cd7f9b6b8b" exitCode=0 Dec 02 14:48:24 crc kubenswrapper[4900]: I1202 14:48:24.918890 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh7zv" event={"ID":"30a6f9e2-a099-4855-bc10-dee02a32a4f4","Type":"ContainerDied","Data":"dcc78bbec027526dec2fba0b09e57c49435862ca90f848d07d4626cd7f9b6b8b"} Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.354926 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.477775 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-catalog-content\") pod \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.477888 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhs6z\" (UniqueName: \"kubernetes.io/projected/30a6f9e2-a099-4855-bc10-dee02a32a4f4-kube-api-access-xhs6z\") pod \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.477939 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-utilities\") pod \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\" (UID: \"30a6f9e2-a099-4855-bc10-dee02a32a4f4\") " Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.478877 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-utilities" (OuterVolumeSpecName: "utilities") pod "30a6f9e2-a099-4855-bc10-dee02a32a4f4" (UID: "30a6f9e2-a099-4855-bc10-dee02a32a4f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.502500 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a6f9e2-a099-4855-bc10-dee02a32a4f4-kube-api-access-xhs6z" (OuterVolumeSpecName: "kube-api-access-xhs6z") pod "30a6f9e2-a099-4855-bc10-dee02a32a4f4" (UID: "30a6f9e2-a099-4855-bc10-dee02a32a4f4"). InnerVolumeSpecName "kube-api-access-xhs6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.579252 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.579611 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhs6z\" (UniqueName: \"kubernetes.io/projected/30a6f9e2-a099-4855-bc10-dee02a32a4f4-kube-api-access-xhs6z\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.591602 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30a6f9e2-a099-4855-bc10-dee02a32a4f4" (UID: "30a6f9e2-a099-4855-bc10-dee02a32a4f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.681389 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30a6f9e2-a099-4855-bc10-dee02a32a4f4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.925878 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh7zv" event={"ID":"30a6f9e2-a099-4855-bc10-dee02a32a4f4","Type":"ContainerDied","Data":"4695a4f132190dc3216d273a7cbc45750e0cab866d2856f9f912f7e29e03a872"} Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.925934 4900 scope.go:117] "RemoveContainer" containerID="dcc78bbec027526dec2fba0b09e57c49435862ca90f848d07d4626cd7f9b6b8b" Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.926395 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kh7zv" Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.951723 4900 scope.go:117] "RemoveContainer" containerID="858fcb411c80eeb753d0b97daa9dd2cb1390e9b27f3e26cf1ef8b6aa5e0610e3" Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.966303 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kh7zv"] Dec 02 14:48:25 crc kubenswrapper[4900]: I1202 14:48:25.971586 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kh7zv"] Dec 02 14:48:26 crc kubenswrapper[4900]: I1202 14:48:26.167991 4900 scope.go:117] "RemoveContainer" containerID="e3b7791de81ae46f98c858533c635204a4f6272efb9bc4ca8ac2a8b219f0266b" Dec 02 14:48:26 crc kubenswrapper[4900]: I1202 14:48:26.920886 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" path="/var/lib/kubelet/pods/30a6f9e2-a099-4855-bc10-dee02a32a4f4/volumes" Dec 02 14:50:45 crc kubenswrapper[4900]: I1202 14:50:45.116960 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:50:45 crc kubenswrapper[4900]: I1202 14:50:45.117534 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:51:15 crc kubenswrapper[4900]: I1202 14:51:15.116167 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:51:15 crc kubenswrapper[4900]: I1202 14:51:15.116771 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:51:45 crc kubenswrapper[4900]: I1202 14:51:45.116537 4900 patch_prober.go:28] 
interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:51:45 crc kubenswrapper[4900]: I1202 14:51:45.117042 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:51:45 crc kubenswrapper[4900]: I1202 14:51:45.117094 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 14:51:45 crc kubenswrapper[4900]: I1202 14:51:45.117743 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb486ba9ab334ae7b29a776d5bd87177a06c3e28f4b3e56e6eb474a7554432d2"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:51:45 crc kubenswrapper[4900]: I1202 14:51:45.117800 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://eb486ba9ab334ae7b29a776d5bd87177a06c3e28f4b3e56e6eb474a7554432d2" gracePeriod=600 Dec 02 14:51:46 crc kubenswrapper[4900]: I1202 14:51:46.042402 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="eb486ba9ab334ae7b29a776d5bd87177a06c3e28f4b3e56e6eb474a7554432d2" exitCode=0 Dec 02 14:51:46 crc kubenswrapper[4900]: I1202 14:51:46.042543 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"eb486ba9ab334ae7b29a776d5bd87177a06c3e28f4b3e56e6eb474a7554432d2"} Dec 02 14:51:46 crc kubenswrapper[4900]: I1202 14:51:46.042853 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d"} Dec 02 14:51:46 crc kubenswrapper[4900]: I1202 14:51:46.042883 4900 scope.go:117] "RemoveContainer" containerID="d6980f15af07a9b7780b0a7390834f3accafecf0487725cd726238e1fddde421" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.245270 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhhxr"] Dec 02 14:52:37 crc kubenswrapper[4900]: E1202 14:52:37.246223 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" containerName="registry-server" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.246254 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" containerName="registry-server" Dec 02 14:52:37 crc kubenswrapper[4900]: E1202 14:52:37.246273 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" 
containerName="extract-utilities" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.246286 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" containerName="extract-utilities" Dec 02 14:52:37 crc kubenswrapper[4900]: E1202 14:52:37.246311 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" containerName="extract-content" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.246321 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" containerName="extract-content" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.246572 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a6f9e2-a099-4855-bc10-dee02a32a4f4" containerName="registry-server" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.248040 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.259882 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhhxr"] Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.360722 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtm2r\" (UniqueName: \"kubernetes.io/projected/928753e8-8dd9-4b9e-9784-6bf694bf3f65-kube-api-access-jtm2r\") pod \"certified-operators-mhhxr\" (UID: \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.360800 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-utilities\") pod \"certified-operators-mhhxr\" (UID: \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.360937 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-catalog-content\") pod \"certified-operators-mhhxr\" (UID: \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.462833 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-utilities\") pod \"certified-operators-mhhxr\" (UID: \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.462886 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-catalog-content\") pod \"certified-operators-mhhxr\" (UID: \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.462955 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtm2r\" (UniqueName: \"kubernetes.io/projected/928753e8-8dd9-4b9e-9784-6bf694bf3f65-kube-api-access-jtm2r\") pod \"certified-operators-mhhxr\" (UID: 
\"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.463397 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-utilities\") pod \"certified-operators-mhhxr\" (UID: \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.463533 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-catalog-content\") pod \"certified-operators-mhhxr\" (UID: \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.487046 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtm2r\" (UniqueName: \"kubernetes.io/projected/928753e8-8dd9-4b9e-9784-6bf694bf3f65-kube-api-access-jtm2r\") pod \"certified-operators-mhhxr\" (UID: \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.572834 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:37 crc kubenswrapper[4900]: I1202 14:52:37.926028 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhhxr"] Dec 02 14:52:38 crc kubenswrapper[4900]: I1202 14:52:38.480437 4900 generic.go:334] "Generic (PLEG): container finished" podID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" containerID="499cbe80ad4ff2ff90e7b64edd8d24d276032d6418a85cd3287aabb971d1f4aa" exitCode=0 Dec 02 14:52:38 crc kubenswrapper[4900]: I1202 14:52:38.480759 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhhxr" event={"ID":"928753e8-8dd9-4b9e-9784-6bf694bf3f65","Type":"ContainerDied","Data":"499cbe80ad4ff2ff90e7b64edd8d24d276032d6418a85cd3287aabb971d1f4aa"} Dec 02 14:52:38 crc kubenswrapper[4900]: I1202 14:52:38.480794 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhhxr" event={"ID":"928753e8-8dd9-4b9e-9784-6bf694bf3f65","Type":"ContainerStarted","Data":"eeca7465f5f35a2bdf1cda61c1c5a8183fc17b653f95b34de3bd991f0ffe62a2"} Dec 02 14:52:38 crc kubenswrapper[4900]: I1202 14:52:38.482924 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:52:39 crc kubenswrapper[4900]: I1202 14:52:39.490543 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhhxr" event={"ID":"928753e8-8dd9-4b9e-9784-6bf694bf3f65","Type":"ContainerStarted","Data":"e4adc56a93cad30babda968c7e69cb96c5d94daba7acf1a35f71e9ce4ffb5b5c"} Dec 02 14:52:40 crc kubenswrapper[4900]: I1202 14:52:40.499345 4900 generic.go:334] "Generic (PLEG): container finished" podID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" containerID="e4adc56a93cad30babda968c7e69cb96c5d94daba7acf1a35f71e9ce4ffb5b5c" exitCode=0 Dec 02 14:52:40 crc kubenswrapper[4900]: I1202 14:52:40.499381 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhhxr" 
event={"ID":"928753e8-8dd9-4b9e-9784-6bf694bf3f65","Type":"ContainerDied","Data":"e4adc56a93cad30babda968c7e69cb96c5d94daba7acf1a35f71e9ce4ffb5b5c"} Dec 02 14:52:41 crc kubenswrapper[4900]: I1202 14:52:41.506896 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhhxr" event={"ID":"928753e8-8dd9-4b9e-9784-6bf694bf3f65","Type":"ContainerStarted","Data":"1686854fab952d2e74b06a0f5ab08bba0f9c4116a0456a585b1a029f9288e081"} Dec 02 14:52:47 crc kubenswrapper[4900]: I1202 14:52:47.597022 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:47 crc kubenswrapper[4900]: I1202 14:52:47.597911 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:47 crc kubenswrapper[4900]: I1202 14:52:47.647422 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:47 crc kubenswrapper[4900]: I1202 14:52:47.665853 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhhxr" podStartSLOduration=8.156348267 podStartE2EDuration="10.665828591s" podCreationTimestamp="2025-12-02 14:52:37 +0000 UTC" firstStartedPulling="2025-12-02 14:52:38.482660445 +0000 UTC m=+4203.898474296" lastFinishedPulling="2025-12-02 14:52:40.992140769 +0000 UTC m=+4206.407954620" observedRunningTime="2025-12-02 14:52:41.535010979 +0000 UTC m=+4206.950824840" watchObservedRunningTime="2025-12-02 14:52:47.665828591 +0000 UTC m=+4213.081642442" Dec 02 14:52:48 crc kubenswrapper[4900]: I1202 14:52:48.604654 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:48 crc kubenswrapper[4900]: I1202 14:52:48.649686 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhhxr"] Dec 02 14:52:50 crc kubenswrapper[4900]: I1202 14:52:50.575279 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhhxr" podUID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" containerName="registry-server" containerID="cri-o://1686854fab952d2e74b06a0f5ab08bba0f9c4116a0456a585b1a029f9288e081" gracePeriod=2 Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.611435 4900 generic.go:334] "Generic (PLEG): container finished" podID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" containerID="1686854fab952d2e74b06a0f5ab08bba0f9c4116a0456a585b1a029f9288e081" exitCode=0 Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.611478 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhhxr" event={"ID":"928753e8-8dd9-4b9e-9784-6bf694bf3f65","Type":"ContainerDied","Data":"1686854fab952d2e74b06a0f5ab08bba0f9c4116a0456a585b1a029f9288e081"} Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.705790 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.826562 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-catalog-content\") pod \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\" (UID: \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.826713 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtm2r\" (UniqueName: \"kubernetes.io/projected/928753e8-8dd9-4b9e-9784-6bf694bf3f65-kube-api-access-jtm2r\") pod \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\" (UID: \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.826853 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-utilities\") pod \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\" (UID: \"928753e8-8dd9-4b9e-9784-6bf694bf3f65\") " Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.829382 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-utilities" (OuterVolumeSpecName: "utilities") pod "928753e8-8dd9-4b9e-9784-6bf694bf3f65" (UID: "928753e8-8dd9-4b9e-9784-6bf694bf3f65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.832698 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.850637 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928753e8-8dd9-4b9e-9784-6bf694bf3f65-kube-api-access-jtm2r" (OuterVolumeSpecName: "kube-api-access-jtm2r") pod "928753e8-8dd9-4b9e-9784-6bf694bf3f65" (UID: "928753e8-8dd9-4b9e-9784-6bf694bf3f65"). InnerVolumeSpecName "kube-api-access-jtm2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.891998 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "928753e8-8dd9-4b9e-9784-6bf694bf3f65" (UID: "928753e8-8dd9-4b9e-9784-6bf694bf3f65"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.933699 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928753e8-8dd9-4b9e-9784-6bf694bf3f65-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:52:54 crc kubenswrapper[4900]: I1202 14:52:54.933732 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtm2r\" (UniqueName: \"kubernetes.io/projected/928753e8-8dd9-4b9e-9784-6bf694bf3f65-kube-api-access-jtm2r\") on node \"crc\" DevicePath \"\"" Dec 02 14:52:55 crc kubenswrapper[4900]: I1202 14:52:55.621614 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhhxr" event={"ID":"928753e8-8dd9-4b9e-9784-6bf694bf3f65","Type":"ContainerDied","Data":"eeca7465f5f35a2bdf1cda61c1c5a8183fc17b653f95b34de3bd991f0ffe62a2"} Dec 02 14:52:55 crc kubenswrapper[4900]: I1202 14:52:55.621748 4900 scope.go:117] "RemoveContainer" containerID="1686854fab952d2e74b06a0f5ab08bba0f9c4116a0456a585b1a029f9288e081" Dec 02 14:52:55 crc kubenswrapper[4900]: I1202 14:52:55.621815 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhhxr" Dec 02 14:52:55 crc kubenswrapper[4900]: I1202 14:52:55.652392 4900 scope.go:117] "RemoveContainer" containerID="e4adc56a93cad30babda968c7e69cb96c5d94daba7acf1a35f71e9ce4ffb5b5c" Dec 02 14:52:55 crc kubenswrapper[4900]: I1202 14:52:55.664811 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhhxr"] Dec 02 14:52:55 crc kubenswrapper[4900]: I1202 14:52:55.676301 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhhxr"] Dec 02 14:52:55 crc kubenswrapper[4900]: I1202 14:52:55.681337 4900 scope.go:117] "RemoveContainer" containerID="499cbe80ad4ff2ff90e7b64edd8d24d276032d6418a85cd3287aabb971d1f4aa" Dec 02 14:52:56 crc kubenswrapper[4900]: I1202 14:52:56.923931 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" path="/var/lib/kubelet/pods/928753e8-8dd9-4b9e-9784-6bf694bf3f65/volumes" Dec 02 14:53:45 crc kubenswrapper[4900]: I1202 14:53:45.117283 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:53:45 crc kubenswrapper[4900]: I1202 14:53:45.117870 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:54:15 crc kubenswrapper[4900]: I1202 14:54:15.117199 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:54:15 crc kubenswrapper[4900]: I1202 14:54:15.117683 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" 
podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:54:45 crc kubenswrapper[4900]: I1202 14:54:45.117338 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:54:45 crc kubenswrapper[4900]: I1202 14:54:45.118241 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:54:45 crc kubenswrapper[4900]: I1202 14:54:45.118306 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 14:54:45 crc kubenswrapper[4900]: I1202 14:54:45.119394 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:54:45 crc kubenswrapper[4900]: I1202 14:54:45.119534 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" gracePeriod=600 Dec 02 14:54:45 crc kubenswrapper[4900]: E1202 14:54:45.245301 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:54:45 crc kubenswrapper[4900]: I1202 14:54:45.581349 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" exitCode=0 Dec 02 14:54:45 crc kubenswrapper[4900]: I1202 14:54:45.581461 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d"} Dec 02 14:54:45 crc kubenswrapper[4900]: I1202 14:54:45.581575 4900 scope.go:117] "RemoveContainer" containerID="eb486ba9ab334ae7b29a776d5bd87177a06c3e28f4b3e56e6eb474a7554432d2" Dec 02 14:54:45 crc kubenswrapper[4900]: I1202 14:54:45.582385 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:54:45 crc kubenswrapper[4900]: E1202 14:54:45.582826 4900 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.694693 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zjztp"] Dec 02 14:54:53 crc kubenswrapper[4900]: E1202 14:54:53.695605 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" containerName="extract-utilities" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.695621 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" containerName="extract-utilities" Dec 02 14:54:53 crc kubenswrapper[4900]: E1202 14:54:53.695664 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" containerName="registry-server" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.695673 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" containerName="registry-server" Dec 02 14:54:53 crc kubenswrapper[4900]: E1202 14:54:53.695705 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" containerName="extract-content" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.695713 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" containerName="extract-content" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.695881 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="928753e8-8dd9-4b9e-9784-6bf694bf3f65" containerName="registry-server" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.697051 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.710578 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjztp"] Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.817536 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-utilities\") pod \"redhat-marketplace-zjztp\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.817635 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-catalog-content\") pod \"redhat-marketplace-zjztp\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.817739 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfgl\" (UniqueName: \"kubernetes.io/projected/911c2a14-5b1f-4f54-87c2-9fedc7a35016-kube-api-access-zrfgl\") pod \"redhat-marketplace-zjztp\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.919023 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfgl\" (UniqueName: \"kubernetes.io/projected/911c2a14-5b1f-4f54-87c2-9fedc7a35016-kube-api-access-zrfgl\") pod \"redhat-marketplace-zjztp\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.919070 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-utilities\") pod \"redhat-marketplace-zjztp\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.919121 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-catalog-content\") pod \"redhat-marketplace-zjztp\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.919547 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-utilities\") pod \"redhat-marketplace-zjztp\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.919583 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-catalog-content\") pod \"redhat-marketplace-zjztp\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:54:53 crc kubenswrapper[4900]: I1202 14:54:53.942313 4900 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zrfgl\" (UniqueName: \"kubernetes.io/projected/911c2a14-5b1f-4f54-87c2-9fedc7a35016-kube-api-access-zrfgl\") pod \"redhat-marketplace-zjztp\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:54:54 crc kubenswrapper[4900]: I1202 14:54:54.019986 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:54:54 crc kubenswrapper[4900]: I1202 14:54:54.259736 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjztp"] Dec 02 14:54:54 crc kubenswrapper[4900]: I1202 14:54:54.667346 4900 generic.go:334] "Generic (PLEG): container finished" podID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" containerID="6f6fab94f2a5113b4109a2fe36ce94d09a9cc50c217734fe6eb6de87e4c0d2de" exitCode=0 Dec 02 14:54:54 crc kubenswrapper[4900]: I1202 14:54:54.667560 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjztp" event={"ID":"911c2a14-5b1f-4f54-87c2-9fedc7a35016","Type":"ContainerDied","Data":"6f6fab94f2a5113b4109a2fe36ce94d09a9cc50c217734fe6eb6de87e4c0d2de"} Dec 02 14:54:54 crc kubenswrapper[4900]: I1202 14:54:54.667664 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjztp" event={"ID":"911c2a14-5b1f-4f54-87c2-9fedc7a35016","Type":"ContainerStarted","Data":"5c2c6f65d0eb20c3d1dc9fd010cb2d3d85d2b43f2d9282f329797371f1f1d703"} Dec 02 14:54:55 crc kubenswrapper[4900]: I1202 14:54:55.678699 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjztp" event={"ID":"911c2a14-5b1f-4f54-87c2-9fedc7a35016","Type":"ContainerStarted","Data":"a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d"} Dec 02 14:54:56 crc kubenswrapper[4900]: I1202 14:54:56.708314 4900 generic.go:334] "Generic (PLEG): container finished" podID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" containerID="a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d" exitCode=0 Dec 02 14:54:56 crc kubenswrapper[4900]: I1202 14:54:56.708626 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjztp" event={"ID":"911c2a14-5b1f-4f54-87c2-9fedc7a35016","Type":"ContainerDied","Data":"a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d"} Dec 02 14:54:56 crc kubenswrapper[4900]: I1202 14:54:56.910339 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:54:56 crc kubenswrapper[4900]: E1202 14:54:56.910543 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:54:58 crc kubenswrapper[4900]: I1202 14:54:58.725565 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjztp" event={"ID":"911c2a14-5b1f-4f54-87c2-9fedc7a35016","Type":"ContainerStarted","Data":"3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4"} Dec 02 14:54:58 crc kubenswrapper[4900]: I1202 14:54:58.764391 4900 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-zjztp" podStartSLOduration=2.822221476 podStartE2EDuration="5.764376487s" podCreationTimestamp="2025-12-02 14:54:53 +0000 UTC" firstStartedPulling="2025-12-02 14:54:54.668811637 +0000 UTC m=+4340.084625488" lastFinishedPulling="2025-12-02 14:54:57.610966638 +0000 UTC m=+4343.026780499" observedRunningTime="2025-12-02 14:54:58.754511748 +0000 UTC m=+4344.170325599" watchObservedRunningTime="2025-12-02 14:54:58.764376487 +0000 UTC m=+4344.180190338" Dec 02 14:55:04 crc kubenswrapper[4900]: I1202 14:55:04.021770 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:55:04 crc kubenswrapper[4900]: I1202 14:55:04.022360 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:55:04 crc kubenswrapper[4900]: I1202 14:55:04.089142 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:55:04 crc kubenswrapper[4900]: I1202 14:55:04.825920 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.067712 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjztp"] Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.068360 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zjztp" podUID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" containerName="registry-server" containerID="cri-o://3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4" gracePeriod=2 Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.522201 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.608908 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrfgl\" (UniqueName: \"kubernetes.io/projected/911c2a14-5b1f-4f54-87c2-9fedc7a35016-kube-api-access-zrfgl\") pod \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.608988 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-catalog-content\") pod \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.609018 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-utilities\") pod \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\" (UID: \"911c2a14-5b1f-4f54-87c2-9fedc7a35016\") " Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.610252 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-utilities" (OuterVolumeSpecName: "utilities") pod "911c2a14-5b1f-4f54-87c2-9fedc7a35016" (UID: "911c2a14-5b1f-4f54-87c2-9fedc7a35016"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.621794 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911c2a14-5b1f-4f54-87c2-9fedc7a35016-kube-api-access-zrfgl" (OuterVolumeSpecName: "kube-api-access-zrfgl") pod "911c2a14-5b1f-4f54-87c2-9fedc7a35016" (UID: "911c2a14-5b1f-4f54-87c2-9fedc7a35016"). InnerVolumeSpecName "kube-api-access-zrfgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.629154 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "911c2a14-5b1f-4f54-87c2-9fedc7a35016" (UID: "911c2a14-5b1f-4f54-87c2-9fedc7a35016"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.710443 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrfgl\" (UniqueName: \"kubernetes.io/projected/911c2a14-5b1f-4f54-87c2-9fedc7a35016-kube-api-access-zrfgl\") on node \"crc\" DevicePath \"\"" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.710475 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.710484 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911c2a14-5b1f-4f54-87c2-9fedc7a35016-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.796547 4900 generic.go:334] "Generic (PLEG): container finished" podID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" containerID="3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4" exitCode=0 Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.796615 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjztp" event={"ID":"911c2a14-5b1f-4f54-87c2-9fedc7a35016","Type":"ContainerDied","Data":"3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4"} Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.796628 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zjztp" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.796784 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zjztp" event={"ID":"911c2a14-5b1f-4f54-87c2-9fedc7a35016","Type":"ContainerDied","Data":"5c2c6f65d0eb20c3d1dc9fd010cb2d3d85d2b43f2d9282f329797371f1f1d703"} Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.796819 4900 scope.go:117] "RemoveContainer" containerID="3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.819373 4900 scope.go:117] "RemoveContainer" containerID="a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.834311 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjztp"] Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.841593 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zjztp"] Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.871689 4900 scope.go:117] "RemoveContainer" containerID="6f6fab94f2a5113b4109a2fe36ce94d09a9cc50c217734fe6eb6de87e4c0d2de" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.889808 4900 scope.go:117] "RemoveContainer" containerID="3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4" Dec 02 14:55:07 crc kubenswrapper[4900]: E1202 14:55:07.890231 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4\": container with ID starting with 3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4 not found: ID does not exist" containerID="3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.890265 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4"} err="failed to get container status \"3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4\": rpc error: code = NotFound desc = could not find container \"3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4\": container with ID starting with 3ae9101582f45b7728804dadb02c14f832f3bbdc06d24c57a985c8b0d62c07e4 not found: ID does not exist" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.890286 4900 scope.go:117] "RemoveContainer" containerID="a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d" Dec 02 14:55:07 crc kubenswrapper[4900]: E1202 14:55:07.890599 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d\": container with ID starting with a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d not found: ID does not exist" containerID="a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.890628 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d"} err="failed to get container status \"a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d\": rpc error: code = NotFound desc = could not find 
container \"a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d\": container with ID starting with a75a3c7e808141fcd2010c4112d8f05b34fa56822f120af49d57ad76a71c342d not found: ID does not exist" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.890677 4900 scope.go:117] "RemoveContainer" containerID="6f6fab94f2a5113b4109a2fe36ce94d09a9cc50c217734fe6eb6de87e4c0d2de" Dec 02 14:55:07 crc kubenswrapper[4900]: E1202 14:55:07.890903 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f6fab94f2a5113b4109a2fe36ce94d09a9cc50c217734fe6eb6de87e4c0d2de\": container with ID starting with 6f6fab94f2a5113b4109a2fe36ce94d09a9cc50c217734fe6eb6de87e4c0d2de not found: ID does not exist" containerID="6f6fab94f2a5113b4109a2fe36ce94d09a9cc50c217734fe6eb6de87e4c0d2de" Dec 02 14:55:07 crc kubenswrapper[4900]: I1202 14:55:07.890939 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f6fab94f2a5113b4109a2fe36ce94d09a9cc50c217734fe6eb6de87e4c0d2de"} err="failed to get container status \"6f6fab94f2a5113b4109a2fe36ce94d09a9cc50c217734fe6eb6de87e4c0d2de\": rpc error: code = NotFound desc = could not find container \"6f6fab94f2a5113b4109a2fe36ce94d09a9cc50c217734fe6eb6de87e4c0d2de\": container with ID starting with 6f6fab94f2a5113b4109a2fe36ce94d09a9cc50c217734fe6eb6de87e4c0d2de not found: ID does not exist" Dec 02 14:55:08 crc kubenswrapper[4900]: I1202 14:55:08.920105 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" path="/var/lib/kubelet/pods/911c2a14-5b1f-4f54-87c2-9fedc7a35016/volumes" Dec 02 14:55:09 crc kubenswrapper[4900]: I1202 14:55:09.910515 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:55:09 crc kubenswrapper[4900]: E1202 14:55:09.910820 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:55:21 crc kubenswrapper[4900]: I1202 14:55:21.909957 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:55:21 crc kubenswrapper[4900]: E1202 14:55:21.910874 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:55:32 crc kubenswrapper[4900]: I1202 14:55:32.910431 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:55:32 crc kubenswrapper[4900]: E1202 14:55:32.912299 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:55:44 crc kubenswrapper[4900]: I1202 14:55:44.919021 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:55:44 crc kubenswrapper[4900]: E1202 14:55:44.920109 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:55:59 crc kubenswrapper[4900]: I1202 14:55:59.911156 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:55:59 crc kubenswrapper[4900]: E1202 14:55:59.912248 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:56:12 crc kubenswrapper[4900]: I1202 14:56:12.910003 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:56:12 crc kubenswrapper[4900]: E1202 14:56:12.910782 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:56:25 crc kubenswrapper[4900]: I1202 14:56:25.909559 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:56:25 crc kubenswrapper[4900]: E1202 14:56:25.910326 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:56:37 crc kubenswrapper[4900]: I1202 14:56:37.911734 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:56:37 crc kubenswrapper[4900]: E1202 14:56:37.912957 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" 
podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:56:50 crc kubenswrapper[4900]: I1202 14:56:50.910078 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:56:50 crc kubenswrapper[4900]: E1202 14:56:50.910837 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:56:54 crc kubenswrapper[4900]: I1202 14:56:54.868089 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-57p75"] Dec 02 14:56:54 crc kubenswrapper[4900]: E1202 14:56:54.868809 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" containerName="extract-utilities" Dec 02 14:56:54 crc kubenswrapper[4900]: I1202 14:56:54.868833 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" containerName="extract-utilities" Dec 02 14:56:54 crc kubenswrapper[4900]: E1202 14:56:54.868869 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" containerName="registry-server" Dec 02 14:56:54 crc kubenswrapper[4900]: I1202 14:56:54.868880 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" containerName="registry-server" Dec 02 14:56:54 crc kubenswrapper[4900]: E1202 14:56:54.868898 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" containerName="extract-content" Dec 02 14:56:54 crc kubenswrapper[4900]: I1202 14:56:54.868908 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" containerName="extract-content" Dec 02 14:56:54 crc kubenswrapper[4900]: I1202 14:56:54.869137 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="911c2a14-5b1f-4f54-87c2-9fedc7a35016" containerName="registry-server" Dec 02 14:56:54 crc kubenswrapper[4900]: I1202 14:56:54.870506 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-57p75" Dec 02 14:56:54 crc kubenswrapper[4900]: I1202 14:56:54.879139 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-57p75"] Dec 02 14:56:54 crc kubenswrapper[4900]: I1202 14:56:54.933330 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-catalog-content\") pod \"community-operators-57p75\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " pod="openshift-marketplace/community-operators-57p75" Dec 02 14:56:54 crc kubenswrapper[4900]: I1202 14:56:54.933580 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv6c4\" (UniqueName: \"kubernetes.io/projected/f8714343-a604-4b5e-8a2f-6c1c453e72eb-kube-api-access-sv6c4\") pod \"community-operators-57p75\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " pod="openshift-marketplace/community-operators-57p75" Dec 02 14:56:54 crc kubenswrapper[4900]: I1202 14:56:54.933670 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-utilities\") pod \"community-operators-57p75\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " pod="openshift-marketplace/community-operators-57p75" Dec 02 14:56:55 crc kubenswrapper[4900]: I1202 14:56:55.034961 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-catalog-content\") pod \"community-operators-57p75\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " pod="openshift-marketplace/community-operators-57p75" Dec 02 14:56:55 crc kubenswrapper[4900]: I1202 14:56:55.035071 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv6c4\" (UniqueName: \"kubernetes.io/projected/f8714343-a604-4b5e-8a2f-6c1c453e72eb-kube-api-access-sv6c4\") pod \"community-operators-57p75\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " pod="openshift-marketplace/community-operators-57p75" Dec 02 14:56:55 crc kubenswrapper[4900]: I1202 14:56:55.035092 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-utilities\") pod \"community-operators-57p75\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " pod="openshift-marketplace/community-operators-57p75" Dec 02 14:56:55 crc kubenswrapper[4900]: I1202 14:56:55.035497 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-catalog-content\") pod \"community-operators-57p75\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " pod="openshift-marketplace/community-operators-57p75" Dec 02 14:56:55 crc kubenswrapper[4900]: I1202 14:56:55.035544 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-utilities\") pod \"community-operators-57p75\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " pod="openshift-marketplace/community-operators-57p75" Dec 02 14:56:55 crc kubenswrapper[4900]: I1202 14:56:55.053486 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sv6c4\" (UniqueName: \"kubernetes.io/projected/f8714343-a604-4b5e-8a2f-6c1c453e72eb-kube-api-access-sv6c4\") pod \"community-operators-57p75\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " pod="openshift-marketplace/community-operators-57p75" Dec 02 14:56:55 crc kubenswrapper[4900]: I1202 14:56:55.212917 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-57p75" Dec 02 14:56:55 crc kubenswrapper[4900]: I1202 14:56:55.724500 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-57p75"] Dec 02 14:56:56 crc kubenswrapper[4900]: I1202 14:56:56.731360 4900 generic.go:334] "Generic (PLEG): container finished" podID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" containerID="eab647d2f8ceb06d6c9a5b757c5bbfca4f992261211f61b39169d746c1a26d3c" exitCode=0 Dec 02 14:56:56 crc kubenswrapper[4900]: I1202 14:56:56.731613 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57p75" event={"ID":"f8714343-a604-4b5e-8a2f-6c1c453e72eb","Type":"ContainerDied","Data":"eab647d2f8ceb06d6c9a5b757c5bbfca4f992261211f61b39169d746c1a26d3c"} Dec 02 14:56:56 crc kubenswrapper[4900]: I1202 14:56:56.731761 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57p75" event={"ID":"f8714343-a604-4b5e-8a2f-6c1c453e72eb","Type":"ContainerStarted","Data":"28c5f0b0d54fa0692a8f3eebebf25ea7ad5a911922e45b5576f5e2d489da905f"} Dec 02 14:56:58 crc kubenswrapper[4900]: I1202 14:56:58.755481 4900 generic.go:334] "Generic (PLEG): container finished" podID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" containerID="d9925b8d5a03a39ef7fda5db0fe71ed2b96472bffb644f0adafc6f1ca46ce0dc" exitCode=0 Dec 02 14:56:58 crc kubenswrapper[4900]: I1202 14:56:58.755612 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57p75" event={"ID":"f8714343-a604-4b5e-8a2f-6c1c453e72eb","Type":"ContainerDied","Data":"d9925b8d5a03a39ef7fda5db0fe71ed2b96472bffb644f0adafc6f1ca46ce0dc"} Dec 02 14:57:00 crc kubenswrapper[4900]: I1202 14:57:00.781316 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57p75" event={"ID":"f8714343-a604-4b5e-8a2f-6c1c453e72eb","Type":"ContainerStarted","Data":"902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd"} Dec 02 14:57:00 crc kubenswrapper[4900]: I1202 14:57:00.835616 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-57p75" podStartSLOduration=4.022020801 podStartE2EDuration="6.835582848s" podCreationTimestamp="2025-12-02 14:56:54 +0000 UTC" firstStartedPulling="2025-12-02 14:56:56.733189983 +0000 UTC m=+4462.149003874" lastFinishedPulling="2025-12-02 14:56:59.54675203 +0000 UTC m=+4464.962565921" observedRunningTime="2025-12-02 14:57:00.808313225 +0000 UTC m=+4466.224127116" watchObservedRunningTime="2025-12-02 14:57:00.835582848 +0000 UTC m=+4466.251396739" Dec 02 14:57:03 crc kubenswrapper[4900]: I1202 14:57:03.910635 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:57:03 crc kubenswrapper[4900]: E1202 14:57:03.912716 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:57:05 crc kubenswrapper[4900]: I1202 14:57:05.213192 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-57p75" Dec 02 14:57:05 crc kubenswrapper[4900]: I1202 14:57:05.213501 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-57p75" Dec 02 14:57:05 crc kubenswrapper[4900]: I1202 14:57:05.273115 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-57p75" Dec 02 14:57:05 crc kubenswrapper[4900]: I1202 14:57:05.873029 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-57p75" Dec 02 14:57:05 crc kubenswrapper[4900]: I1202 14:57:05.920991 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-57p75"] Dec 02 14:57:07 crc kubenswrapper[4900]: I1202 14:57:07.832447 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-57p75" podUID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" containerName="registry-server" containerID="cri-o://902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd" gracePeriod=2 Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.264083 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-57p75" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.359465 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-catalog-content\") pod \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.359568 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv6c4\" (UniqueName: \"kubernetes.io/projected/f8714343-a604-4b5e-8a2f-6c1c453e72eb-kube-api-access-sv6c4\") pod \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.359775 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-utilities\") pod \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\" (UID: \"f8714343-a604-4b5e-8a2f-6c1c453e72eb\") " Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.360460 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-utilities" (OuterVolumeSpecName: "utilities") pod "f8714343-a604-4b5e-8a2f-6c1c453e72eb" (UID: "f8714343-a604-4b5e-8a2f-6c1c453e72eb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.366300 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8714343-a604-4b5e-8a2f-6c1c453e72eb-kube-api-access-sv6c4" (OuterVolumeSpecName: "kube-api-access-sv6c4") pod "f8714343-a604-4b5e-8a2f-6c1c453e72eb" (UID: "f8714343-a604-4b5e-8a2f-6c1c453e72eb"). InnerVolumeSpecName "kube-api-access-sv6c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.410330 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8714343-a604-4b5e-8a2f-6c1c453e72eb" (UID: "f8714343-a604-4b5e-8a2f-6c1c453e72eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.461371 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.461408 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8714343-a604-4b5e-8a2f-6c1c453e72eb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.461419 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv6c4\" (UniqueName: \"kubernetes.io/projected/f8714343-a604-4b5e-8a2f-6c1c453e72eb-kube-api-access-sv6c4\") on node \"crc\" DevicePath \"\"" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.846747 4900 generic.go:334] "Generic (PLEG): container finished" podID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" containerID="902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd" exitCode=0 Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.846794 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57p75" event={"ID":"f8714343-a604-4b5e-8a2f-6c1c453e72eb","Type":"ContainerDied","Data":"902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd"} Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.846826 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57p75" event={"ID":"f8714343-a604-4b5e-8a2f-6c1c453e72eb","Type":"ContainerDied","Data":"28c5f0b0d54fa0692a8f3eebebf25ea7ad5a911922e45b5576f5e2d489da905f"} Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.846842 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-57p75" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.846844 4900 scope.go:117] "RemoveContainer" containerID="902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.888127 4900 scope.go:117] "RemoveContainer" containerID="d9925b8d5a03a39ef7fda5db0fe71ed2b96472bffb644f0adafc6f1ca46ce0dc" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.890187 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-57p75"] Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.895847 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-57p75"] Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.908754 4900 scope.go:117] "RemoveContainer" containerID="eab647d2f8ceb06d6c9a5b757c5bbfca4f992261211f61b39169d746c1a26d3c" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.919478 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" path="/var/lib/kubelet/pods/f8714343-a604-4b5e-8a2f-6c1c453e72eb/volumes" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.925639 4900 scope.go:117] "RemoveContainer" containerID="902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd" Dec 02 14:57:08 crc kubenswrapper[4900]: E1202 14:57:08.926073 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd\": container with ID starting with 902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd not found: ID does not exist" containerID="902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.926110 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd"} err="failed to get container status \"902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd\": rpc error: code = NotFound desc = could not find container \"902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd\": container with ID starting with 902f308b169d4d83054250c7cc81bca780e1e7885f03d4dad97806ef9bace5fd not found: ID does not exist" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.926156 4900 scope.go:117] "RemoveContainer" containerID="d9925b8d5a03a39ef7fda5db0fe71ed2b96472bffb644f0adafc6f1ca46ce0dc" Dec 02 14:57:08 crc kubenswrapper[4900]: E1202 14:57:08.926404 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9925b8d5a03a39ef7fda5db0fe71ed2b96472bffb644f0adafc6f1ca46ce0dc\": container with ID starting with d9925b8d5a03a39ef7fda5db0fe71ed2b96472bffb644f0adafc6f1ca46ce0dc not found: ID does not exist" containerID="d9925b8d5a03a39ef7fda5db0fe71ed2b96472bffb644f0adafc6f1ca46ce0dc" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.926433 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9925b8d5a03a39ef7fda5db0fe71ed2b96472bffb644f0adafc6f1ca46ce0dc"} err="failed to get container status \"d9925b8d5a03a39ef7fda5db0fe71ed2b96472bffb644f0adafc6f1ca46ce0dc\": rpc error: code = NotFound desc = could not find container 
\"d9925b8d5a03a39ef7fda5db0fe71ed2b96472bffb644f0adafc6f1ca46ce0dc\": container with ID starting with d9925b8d5a03a39ef7fda5db0fe71ed2b96472bffb644f0adafc6f1ca46ce0dc not found: ID does not exist" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.926450 4900 scope.go:117] "RemoveContainer" containerID="eab647d2f8ceb06d6c9a5b757c5bbfca4f992261211f61b39169d746c1a26d3c" Dec 02 14:57:08 crc kubenswrapper[4900]: E1202 14:57:08.926823 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eab647d2f8ceb06d6c9a5b757c5bbfca4f992261211f61b39169d746c1a26d3c\": container with ID starting with eab647d2f8ceb06d6c9a5b757c5bbfca4f992261211f61b39169d746c1a26d3c not found: ID does not exist" containerID="eab647d2f8ceb06d6c9a5b757c5bbfca4f992261211f61b39169d746c1a26d3c" Dec 02 14:57:08 crc kubenswrapper[4900]: I1202 14:57:08.926854 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab647d2f8ceb06d6c9a5b757c5bbfca4f992261211f61b39169d746c1a26d3c"} err="failed to get container status \"eab647d2f8ceb06d6c9a5b757c5bbfca4f992261211f61b39169d746c1a26d3c\": rpc error: code = NotFound desc = could not find container \"eab647d2f8ceb06d6c9a5b757c5bbfca4f992261211f61b39169d746c1a26d3c\": container with ID starting with eab647d2f8ceb06d6c9a5b757c5bbfca4f992261211f61b39169d746c1a26d3c not found: ID does not exist" Dec 02 14:57:15 crc kubenswrapper[4900]: I1202 14:57:15.910971 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:57:15 crc kubenswrapper[4900]: E1202 14:57:15.912248 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:57:27 crc kubenswrapper[4900]: I1202 14:57:27.911318 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:57:27 crc kubenswrapper[4900]: E1202 14:57:27.913048 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:57:38 crc kubenswrapper[4900]: I1202 14:57:38.910095 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:57:38 crc kubenswrapper[4900]: E1202 14:57:38.910895 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:57:53 crc kubenswrapper[4900]: I1202 14:57:53.910077 4900 scope.go:117] "RemoveContainer" 
containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:57:53 crc kubenswrapper[4900]: E1202 14:57:53.910763 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:58:08 crc kubenswrapper[4900]: I1202 14:58:08.909675 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:58:08 crc kubenswrapper[4900]: E1202 14:58:08.910271 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:58:20 crc kubenswrapper[4900]: I1202 14:58:20.911085 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:58:20 crc kubenswrapper[4900]: E1202 14:58:20.912107 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:58:27 crc kubenswrapper[4900]: I1202 14:58:27.889222 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-lrx7m"] Dec 02 14:58:27 crc kubenswrapper[4900]: I1202 14:58:27.895957 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-lrx7m"] Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.012817 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-kj5rq"] Dec 02 14:58:28 crc kubenswrapper[4900]: E1202 14:58:28.013301 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" containerName="extract-content" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.013369 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" containerName="extract-content" Dec 02 14:58:28 crc kubenswrapper[4900]: E1202 14:58:28.013446 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" containerName="extract-utilities" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.013502 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" containerName="extract-utilities" Dec 02 14:58:28 crc kubenswrapper[4900]: E1202 14:58:28.013569 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" containerName="registry-server" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.013626 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" 
containerName="registry-server" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.014487 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8714343-a604-4b5e-8a2f-6c1c453e72eb" containerName="registry-server" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.015155 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.019178 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.019550 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.020027 4900 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-p768c" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.020505 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.028929 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kj5rq"] Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.148325 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fac74a75-8462-4448-b3e9-d4f19978914a-crc-storage\") pod \"crc-storage-crc-kj5rq\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.148686 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fac74a75-8462-4448-b3e9-d4f19978914a-node-mnt\") pod \"crc-storage-crc-kj5rq\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.148807 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjvzl\" (UniqueName: \"kubernetes.io/projected/fac74a75-8462-4448-b3e9-d4f19978914a-kube-api-access-kjvzl\") pod \"crc-storage-crc-kj5rq\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.250387 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fac74a75-8462-4448-b3e9-d4f19978914a-crc-storage\") pod \"crc-storage-crc-kj5rq\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.250457 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fac74a75-8462-4448-b3e9-d4f19978914a-node-mnt\") pod \"crc-storage-crc-kj5rq\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.250494 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjvzl\" (UniqueName: \"kubernetes.io/projected/fac74a75-8462-4448-b3e9-d4f19978914a-kube-api-access-kjvzl\") pod \"crc-storage-crc-kj5rq\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " 
pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.250865 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fac74a75-8462-4448-b3e9-d4f19978914a-node-mnt\") pod \"crc-storage-crc-kj5rq\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.251247 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fac74a75-8462-4448-b3e9-d4f19978914a-crc-storage\") pod \"crc-storage-crc-kj5rq\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.267460 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjvzl\" (UniqueName: \"kubernetes.io/projected/fac74a75-8462-4448-b3e9-d4f19978914a-kube-api-access-kjvzl\") pod \"crc-storage-crc-kj5rq\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.340095 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.844889 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kj5rq"] Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.845983 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:58:28 crc kubenswrapper[4900]: I1202 14:58:28.919928 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae" path="/var/lib/kubelet/pods/1bc1fffd-0ea2-4a9b-8902-b252da1ba5ae/volumes" Dec 02 14:58:29 crc kubenswrapper[4900]: I1202 14:58:29.555117 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kj5rq" event={"ID":"fac74a75-8462-4448-b3e9-d4f19978914a","Type":"ContainerStarted","Data":"55886fde65daf3a82b8e74b859de25c9862e5b448acf6177b1d87dc931845db1"} Dec 02 14:58:30 crc kubenswrapper[4900]: I1202 14:58:30.566046 4900 generic.go:334] "Generic (PLEG): container finished" podID="fac74a75-8462-4448-b3e9-d4f19978914a" containerID="73dc3ee87056981afdb2860c197420079ce2e31cf81b8d84f1a980efb08066cb" exitCode=0 Dec 02 14:58:30 crc kubenswrapper[4900]: I1202 14:58:30.566106 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kj5rq" event={"ID":"fac74a75-8462-4448-b3e9-d4f19978914a","Type":"ContainerDied","Data":"73dc3ee87056981afdb2860c197420079ce2e31cf81b8d84f1a980efb08066cb"} Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.040038 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.108964 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fac74a75-8462-4448-b3e9-d4f19978914a-node-mnt\") pod \"fac74a75-8462-4448-b3e9-d4f19978914a\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.109118 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fac74a75-8462-4448-b3e9-d4f19978914a-crc-storage\") pod \"fac74a75-8462-4448-b3e9-d4f19978914a\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.109122 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fac74a75-8462-4448-b3e9-d4f19978914a-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "fac74a75-8462-4448-b3e9-d4f19978914a" (UID: "fac74a75-8462-4448-b3e9-d4f19978914a"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.109287 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjvzl\" (UniqueName: \"kubernetes.io/projected/fac74a75-8462-4448-b3e9-d4f19978914a-kube-api-access-kjvzl\") pod \"fac74a75-8462-4448-b3e9-d4f19978914a\" (UID: \"fac74a75-8462-4448-b3e9-d4f19978914a\") " Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.109689 4900 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fac74a75-8462-4448-b3e9-d4f19978914a-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.115789 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac74a75-8462-4448-b3e9-d4f19978914a-kube-api-access-kjvzl" (OuterVolumeSpecName: "kube-api-access-kjvzl") pod "fac74a75-8462-4448-b3e9-d4f19978914a" (UID: "fac74a75-8462-4448-b3e9-d4f19978914a"). InnerVolumeSpecName "kube-api-access-kjvzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.144160 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac74a75-8462-4448-b3e9-d4f19978914a-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "fac74a75-8462-4448-b3e9-d4f19978914a" (UID: "fac74a75-8462-4448-b3e9-d4f19978914a"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.210794 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjvzl\" (UniqueName: \"kubernetes.io/projected/fac74a75-8462-4448-b3e9-d4f19978914a-kube-api-access-kjvzl\") on node \"crc\" DevicePath \"\"" Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.210846 4900 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fac74a75-8462-4448-b3e9-d4f19978914a-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.593994 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kj5rq" event={"ID":"fac74a75-8462-4448-b3e9-d4f19978914a","Type":"ContainerDied","Data":"55886fde65daf3a82b8e74b859de25c9862e5b448acf6177b1d87dc931845db1"} Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.594035 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55886fde65daf3a82b8e74b859de25c9862e5b448acf6177b1d87dc931845db1" Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.594112 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kj5rq" Dec 02 14:58:32 crc kubenswrapper[4900]: I1202 14:58:32.910226 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:58:32 crc kubenswrapper[4900]: E1202 14:58:32.911216 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.343474 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-kj5rq"] Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.352112 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-kj5rq"] Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.442581 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-z9ssw"] Dec 02 14:58:34 crc kubenswrapper[4900]: E1202 14:58:34.442890 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac74a75-8462-4448-b3e9-d4f19978914a" containerName="storage" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.442909 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac74a75-8462-4448-b3e9-d4f19978914a" containerName="storage" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.443037 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac74a75-8462-4448-b3e9-d4f19978914a" containerName="storage" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.443468 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.445423 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.446068 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.446069 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.446157 4900 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-p768c" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.482106 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-z9ssw"] Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.548360 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e783ed7a-639d-408a-be5a-546eccfdc3ca-node-mnt\") pod \"crc-storage-crc-z9ssw\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.548445 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsnwb\" (UniqueName: \"kubernetes.io/projected/e783ed7a-639d-408a-be5a-546eccfdc3ca-kube-api-access-fsnwb\") pod \"crc-storage-crc-z9ssw\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.548474 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e783ed7a-639d-408a-be5a-546eccfdc3ca-crc-storage\") pod \"crc-storage-crc-z9ssw\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.650025 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e783ed7a-639d-408a-be5a-546eccfdc3ca-node-mnt\") pod \"crc-storage-crc-z9ssw\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.650116 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsnwb\" (UniqueName: \"kubernetes.io/projected/e783ed7a-639d-408a-be5a-546eccfdc3ca-kube-api-access-fsnwb\") pod \"crc-storage-crc-z9ssw\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.650145 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e783ed7a-639d-408a-be5a-546eccfdc3ca-crc-storage\") pod \"crc-storage-crc-z9ssw\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.650567 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e783ed7a-639d-408a-be5a-546eccfdc3ca-node-mnt\") pod \"crc-storage-crc-z9ssw\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " 
pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.651009 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e783ed7a-639d-408a-be5a-546eccfdc3ca-crc-storage\") pod \"crc-storage-crc-z9ssw\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.679774 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsnwb\" (UniqueName: \"kubernetes.io/projected/e783ed7a-639d-408a-be5a-546eccfdc3ca-kube-api-access-fsnwb\") pod \"crc-storage-crc-z9ssw\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.759044 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:34 crc kubenswrapper[4900]: I1202 14:58:34.922973 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac74a75-8462-4448-b3e9-d4f19978914a" path="/var/lib/kubelet/pods/fac74a75-8462-4448-b3e9-d4f19978914a/volumes" Dec 02 14:58:35 crc kubenswrapper[4900]: I1202 14:58:35.001213 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-z9ssw"] Dec 02 14:58:35 crc kubenswrapper[4900]: W1202 14:58:35.004474 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode783ed7a_639d_408a_be5a_546eccfdc3ca.slice/crio-a425514918bb5442ff8b1c685c7d8b106fec80ed63a3770a2b62baafd628b628 WatchSource:0}: Error finding container a425514918bb5442ff8b1c685c7d8b106fec80ed63a3770a2b62baafd628b628: Status 404 returned error can't find the container with id a425514918bb5442ff8b1c685c7d8b106fec80ed63a3770a2b62baafd628b628 Dec 02 14:58:35 crc kubenswrapper[4900]: I1202 14:58:35.615339 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z9ssw" event={"ID":"e783ed7a-639d-408a-be5a-546eccfdc3ca","Type":"ContainerStarted","Data":"a425514918bb5442ff8b1c685c7d8b106fec80ed63a3770a2b62baafd628b628"} Dec 02 14:58:36 crc kubenswrapper[4900]: I1202 14:58:36.625115 4900 generic.go:334] "Generic (PLEG): container finished" podID="e783ed7a-639d-408a-be5a-546eccfdc3ca" containerID="3da683e18a39efd037e5ef1dd312e39e801350fd5d218833b5cce6708d31063c" exitCode=0 Dec 02 14:58:36 crc kubenswrapper[4900]: I1202 14:58:36.625221 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z9ssw" event={"ID":"e783ed7a-639d-408a-be5a-546eccfdc3ca","Type":"ContainerDied","Data":"3da683e18a39efd037e5ef1dd312e39e801350fd5d218833b5cce6708d31063c"} Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.010014 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.099262 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e783ed7a-639d-408a-be5a-546eccfdc3ca-node-mnt\") pod \"e783ed7a-639d-408a-be5a-546eccfdc3ca\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.099333 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e783ed7a-639d-408a-be5a-546eccfdc3ca-crc-storage\") pod \"e783ed7a-639d-408a-be5a-546eccfdc3ca\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.099407 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e783ed7a-639d-408a-be5a-546eccfdc3ca-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "e783ed7a-639d-408a-be5a-546eccfdc3ca" (UID: "e783ed7a-639d-408a-be5a-546eccfdc3ca"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.099440 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsnwb\" (UniqueName: \"kubernetes.io/projected/e783ed7a-639d-408a-be5a-546eccfdc3ca-kube-api-access-fsnwb\") pod \"e783ed7a-639d-408a-be5a-546eccfdc3ca\" (UID: \"e783ed7a-639d-408a-be5a-546eccfdc3ca\") " Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.099760 4900 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e783ed7a-639d-408a-be5a-546eccfdc3ca-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.105706 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e783ed7a-639d-408a-be5a-546eccfdc3ca-kube-api-access-fsnwb" (OuterVolumeSpecName: "kube-api-access-fsnwb") pod "e783ed7a-639d-408a-be5a-546eccfdc3ca" (UID: "e783ed7a-639d-408a-be5a-546eccfdc3ca"). InnerVolumeSpecName "kube-api-access-fsnwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.136289 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e783ed7a-639d-408a-be5a-546eccfdc3ca-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "e783ed7a-639d-408a-be5a-546eccfdc3ca" (UID: "e783ed7a-639d-408a-be5a-546eccfdc3ca"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.200630 4900 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e783ed7a-639d-408a-be5a-546eccfdc3ca-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.200955 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsnwb\" (UniqueName: \"kubernetes.io/projected/e783ed7a-639d-408a-be5a-546eccfdc3ca-kube-api-access-fsnwb\") on node \"crc\" DevicePath \"\"" Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.644920 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z9ssw" event={"ID":"e783ed7a-639d-408a-be5a-546eccfdc3ca","Type":"ContainerDied","Data":"a425514918bb5442ff8b1c685c7d8b106fec80ed63a3770a2b62baafd628b628"} Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.644978 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z9ssw" Dec 02 14:58:38 crc kubenswrapper[4900]: I1202 14:58:38.644983 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a425514918bb5442ff8b1c685c7d8b106fec80ed63a3770a2b62baafd628b628" Dec 02 14:58:47 crc kubenswrapper[4900]: I1202 14:58:47.911320 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:58:47 crc kubenswrapper[4900]: E1202 14:58:47.912528 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:58:58 crc kubenswrapper[4900]: I1202 14:58:58.910163 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:58:58 crc kubenswrapper[4900]: E1202 14:58:58.911190 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:59:00 crc kubenswrapper[4900]: I1202 14:59:00.279503 4900 scope.go:117] "RemoveContainer" containerID="d0f05bd33c91e566b3f47f46beb63ce3a10fb9bb00368aab9222e9f18b8c846b" Dec 02 14:59:13 crc kubenswrapper[4900]: I1202 14:59:13.909576 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:59:13 crc kubenswrapper[4900]: E1202 14:59:13.910415 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:59:28 crc kubenswrapper[4900]: I1202 
14:59:28.909781 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:59:28 crc kubenswrapper[4900]: E1202 14:59:28.910582 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:59:39 crc kubenswrapper[4900]: I1202 14:59:39.910330 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:59:39 crc kubenswrapper[4900]: E1202 14:59:39.911341 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 14:59:53 crc kubenswrapper[4900]: I1202 14:59:53.910484 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 14:59:54 crc kubenswrapper[4900]: I1202 14:59:54.281280 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"b795cb18b3b3ac1e48eb43f789980baa8ed78f685f14ca23961f153619a7bd73"} Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.170369 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr"] Dec 02 15:00:00 crc kubenswrapper[4900]: E1202 15:00:00.171268 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e783ed7a-639d-408a-be5a-546eccfdc3ca" containerName="storage" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.171283 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e783ed7a-639d-408a-be5a-546eccfdc3ca" containerName="storage" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.171461 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e783ed7a-639d-408a-be5a-546eccfdc3ca" containerName="storage" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.172052 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.174702 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.175048 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.183562 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr"] Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.220440 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79cjs\" (UniqueName: \"kubernetes.io/projected/faaca8d7-2465-4690-992b-9b46293206c7-kube-api-access-79cjs\") pod \"collect-profiles-29411460-lnljr\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.220618 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/faaca8d7-2465-4690-992b-9b46293206c7-secret-volume\") pod \"collect-profiles-29411460-lnljr\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.220686 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faaca8d7-2465-4690-992b-9b46293206c7-config-volume\") pod \"collect-profiles-29411460-lnljr\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.322479 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79cjs\" (UniqueName: \"kubernetes.io/projected/faaca8d7-2465-4690-992b-9b46293206c7-kube-api-access-79cjs\") pod \"collect-profiles-29411460-lnljr\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.322620 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/faaca8d7-2465-4690-992b-9b46293206c7-secret-volume\") pod \"collect-profiles-29411460-lnljr\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.322706 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faaca8d7-2465-4690-992b-9b46293206c7-config-volume\") pod \"collect-profiles-29411460-lnljr\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.324757 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faaca8d7-2465-4690-992b-9b46293206c7-config-volume\") pod 
\"collect-profiles-29411460-lnljr\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.337061 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/faaca8d7-2465-4690-992b-9b46293206c7-secret-volume\") pod \"collect-profiles-29411460-lnljr\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.346596 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79cjs\" (UniqueName: \"kubernetes.io/projected/faaca8d7-2465-4690-992b-9b46293206c7-kube-api-access-79cjs\") pod \"collect-profiles-29411460-lnljr\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.503281 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" Dec 02 15:00:00 crc kubenswrapper[4900]: I1202 15:00:00.984698 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr"] Dec 02 15:00:00 crc kubenswrapper[4900]: W1202 15:00:00.987392 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaaca8d7_2465_4690_992b_9b46293206c7.slice/crio-37fde6624a79bc5917fb5009540a5319d70f4d13dabfc4d37479e2723b826b57 WatchSource:0}: Error finding container 37fde6624a79bc5917fb5009540a5319d70f4d13dabfc4d37479e2723b826b57: Status 404 returned error can't find the container with id 37fde6624a79bc5917fb5009540a5319d70f4d13dabfc4d37479e2723b826b57 Dec 02 15:00:01 crc kubenswrapper[4900]: I1202 15:00:01.335478 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" event={"ID":"faaca8d7-2465-4690-992b-9b46293206c7","Type":"ContainerStarted","Data":"858b5396c8c7a21b72fc5cc14aad30a90ce312a02fe3e0309874e572b929fec9"} Dec 02 15:00:01 crc kubenswrapper[4900]: I1202 15:00:01.335889 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" event={"ID":"faaca8d7-2465-4690-992b-9b46293206c7","Type":"ContainerStarted","Data":"37fde6624a79bc5917fb5009540a5319d70f4d13dabfc4d37479e2723b826b57"} Dec 02 15:00:01 crc kubenswrapper[4900]: I1202 15:00:01.358431 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" podStartSLOduration=1.358409967 podStartE2EDuration="1.358409967s" podCreationTimestamp="2025-12-02 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:00:01.356003319 +0000 UTC m=+4646.771817170" watchObservedRunningTime="2025-12-02 15:00:01.358409967 +0000 UTC m=+4646.774223818" Dec 02 15:00:02 crc kubenswrapper[4900]: I1202 15:00:02.344596 4900 generic.go:334] "Generic (PLEG): container finished" podID="faaca8d7-2465-4690-992b-9b46293206c7" containerID="858b5396c8c7a21b72fc5cc14aad30a90ce312a02fe3e0309874e572b929fec9" exitCode=0 Dec 02 15:00:02 crc kubenswrapper[4900]: I1202 15:00:02.344690 
4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" event={"ID":"faaca8d7-2465-4690-992b-9b46293206c7","Type":"ContainerDied","Data":"858b5396c8c7a21b72fc5cc14aad30a90ce312a02fe3e0309874e572b929fec9"}
Dec 02 15:00:03 crc kubenswrapper[4900]: I1202 15:00:03.640479 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr"
Dec 02 15:00:03 crc kubenswrapper[4900]: I1202 15:00:03.671736 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79cjs\" (UniqueName: \"kubernetes.io/projected/faaca8d7-2465-4690-992b-9b46293206c7-kube-api-access-79cjs\") pod \"faaca8d7-2465-4690-992b-9b46293206c7\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") "
Dec 02 15:00:03 crc kubenswrapper[4900]: I1202 15:00:03.671792 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faaca8d7-2465-4690-992b-9b46293206c7-config-volume\") pod \"faaca8d7-2465-4690-992b-9b46293206c7\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") "
Dec 02 15:00:03 crc kubenswrapper[4900]: I1202 15:00:03.671850 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/faaca8d7-2465-4690-992b-9b46293206c7-secret-volume\") pod \"faaca8d7-2465-4690-992b-9b46293206c7\" (UID: \"faaca8d7-2465-4690-992b-9b46293206c7\") "
Dec 02 15:00:03 crc kubenswrapper[4900]: I1202 15:00:03.672421 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faaca8d7-2465-4690-992b-9b46293206c7-config-volume" (OuterVolumeSpecName: "config-volume") pod "faaca8d7-2465-4690-992b-9b46293206c7" (UID: "faaca8d7-2465-4690-992b-9b46293206c7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:00:03 crc kubenswrapper[4900]: I1202 15:00:03.678114 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faaca8d7-2465-4690-992b-9b46293206c7-kube-api-access-79cjs" (OuterVolumeSpecName: "kube-api-access-79cjs") pod "faaca8d7-2465-4690-992b-9b46293206c7" (UID: "faaca8d7-2465-4690-992b-9b46293206c7"). InnerVolumeSpecName "kube-api-access-79cjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:00:03 crc kubenswrapper[4900]: I1202 15:00:03.678722 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faaca8d7-2465-4690-992b-9b46293206c7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "faaca8d7-2465-4690-992b-9b46293206c7" (UID: "faaca8d7-2465-4690-992b-9b46293206c7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:00:03 crc kubenswrapper[4900]: I1202 15:00:03.773309 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79cjs\" (UniqueName: \"kubernetes.io/projected/faaca8d7-2465-4690-992b-9b46293206c7-kube-api-access-79cjs\") on node \"crc\" DevicePath \"\""
Dec 02 15:00:03 crc kubenswrapper[4900]: I1202 15:00:03.773344 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faaca8d7-2465-4690-992b-9b46293206c7-config-volume\") on node \"crc\" DevicePath \"\""
Dec 02 15:00:03 crc kubenswrapper[4900]: I1202 15:00:03.773354 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/faaca8d7-2465-4690-992b-9b46293206c7-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 02 15:00:04 crc kubenswrapper[4900]: I1202 15:00:04.361812 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr" event={"ID":"faaca8d7-2465-4690-992b-9b46293206c7","Type":"ContainerDied","Data":"37fde6624a79bc5917fb5009540a5319d70f4d13dabfc4d37479e2723b826b57"}
Dec 02 15:00:04 crc kubenswrapper[4900]: I1202 15:00:04.361862 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37fde6624a79bc5917fb5009540a5319d70f4d13dabfc4d37479e2723b826b57"
Dec 02 15:00:04 crc kubenswrapper[4900]: I1202 15:00:04.361923 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr"
Dec 02 15:00:04 crc kubenswrapper[4900]: I1202 15:00:04.435026 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj"]
Dec 02 15:00:04 crc kubenswrapper[4900]: I1202 15:00:04.439543 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411415-pwscj"]
Dec 02 15:00:04 crc kubenswrapper[4900]: I1202 15:00:04.923485 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef81ba3b-7225-4356-bc06-bd331da5c7be" path="/var/lib/kubelet/pods/ef81ba3b-7225-4356-bc06-bd331da5c7be/volumes"
Dec 02 15:01:00 crc kubenswrapper[4900]: I1202 15:01:00.345349 4900 scope.go:117] "RemoveContainer" containerID="0d4d680e9af109ce282353ea140d6fee77441bc23989f17fc55f9103cdb5c551"
Dec 02 15:01:39 crc kubenswrapper[4900]: I1202 15:01:39.990002 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xpj7s"]
Dec 02 15:01:39 crc kubenswrapper[4900]: E1202 15:01:39.991281 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faaca8d7-2465-4690-992b-9b46293206c7" containerName="collect-profiles"
Dec 02 15:01:39 crc kubenswrapper[4900]: I1202 15:01:39.991308 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="faaca8d7-2465-4690-992b-9b46293206c7" containerName="collect-profiles"
Dec 02 15:01:39 crc kubenswrapper[4900]: I1202 15:01:39.991634 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="faaca8d7-2465-4690-992b-9b46293206c7" containerName="collect-profiles"
Dec 02 15:01:39 crc kubenswrapper[4900]: I1202 15:01:39.993514 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xpj7s"
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.003354 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xpj7s"]
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.044036 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz2ss\" (UniqueName: \"kubernetes.io/projected/2fb941bd-6874-4237-9da4-62575087d339-kube-api-access-lz2ss\") pod \"redhat-operators-xpj7s\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " pod="openshift-marketplace/redhat-operators-xpj7s"
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.044440 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-utilities\") pod \"redhat-operators-xpj7s\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " pod="openshift-marketplace/redhat-operators-xpj7s"
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.044661 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-catalog-content\") pod \"redhat-operators-xpj7s\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " pod="openshift-marketplace/redhat-operators-xpj7s"
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.146607 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-utilities\") pod \"redhat-operators-xpj7s\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " pod="openshift-marketplace/redhat-operators-xpj7s"
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.146705 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-catalog-content\") pod \"redhat-operators-xpj7s\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " pod="openshift-marketplace/redhat-operators-xpj7s"
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.146810 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz2ss\" (UniqueName: \"kubernetes.io/projected/2fb941bd-6874-4237-9da4-62575087d339-kube-api-access-lz2ss\") pod \"redhat-operators-xpj7s\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " pod="openshift-marketplace/redhat-operators-xpj7s"
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.147382 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-utilities\") pod \"redhat-operators-xpj7s\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " pod="openshift-marketplace/redhat-operators-xpj7s"
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.147509 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-catalog-content\") pod \"redhat-operators-xpj7s\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " pod="openshift-marketplace/redhat-operators-xpj7s"
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.173481 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz2ss\" (UniqueName: \"kubernetes.io/projected/2fb941bd-6874-4237-9da4-62575087d339-kube-api-access-lz2ss\") pod \"redhat-operators-xpj7s\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " pod="openshift-marketplace/redhat-operators-xpj7s"
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.351005 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xpj7s"
Dec 02 15:01:40 crc kubenswrapper[4900]: I1202 15:01:40.779905 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xpj7s"]
Dec 02 15:01:41 crc kubenswrapper[4900]: I1202 15:01:41.179304 4900 generic.go:334] "Generic (PLEG): container finished" podID="2fb941bd-6874-4237-9da4-62575087d339" containerID="8335f2edd243a7c11d76b80f756221cf127b3dff31c23100633fa8eedfca764c" exitCode=0
Dec 02 15:01:41 crc kubenswrapper[4900]: I1202 15:01:41.179352 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpj7s" event={"ID":"2fb941bd-6874-4237-9da4-62575087d339","Type":"ContainerDied","Data":"8335f2edd243a7c11d76b80f756221cf127b3dff31c23100633fa8eedfca764c"}
Dec 02 15:01:41 crc kubenswrapper[4900]: I1202 15:01:41.179381 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpj7s" event={"ID":"2fb941bd-6874-4237-9da4-62575087d339","Type":"ContainerStarted","Data":"b623c0ec0f7f3fdd674d17641186a1f853e09a029b0866c6d0eb72c4ac53cc1c"}
Dec 02 15:01:41 crc kubenswrapper[4900]: I1202 15:01:41.938705 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-88zcr"]
Dec 02 15:01:41 crc kubenswrapper[4900]: I1202 15:01:41.940140 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr"
Dec 02 15:01:41 crc kubenswrapper[4900]: I1202 15:01:41.946036 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Dec 02 15:01:41 crc kubenswrapper[4900]: I1202 15:01:41.946172 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Dec 02 15:01:41 crc kubenswrapper[4900]: I1202 15:01:41.946614 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Dec 02 15:01:41 crc kubenswrapper[4900]: I1202 15:01:41.946840 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Dec 02 15:01:41 crc kubenswrapper[4900]: I1202 15:01:41.947074 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fdbmf"
Dec 02 15:01:41 crc kubenswrapper[4900]: I1202 15:01:41.949137 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-88zcr"]
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.076340 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-config\") pod \"dnsmasq-dns-5d7b5456f5-88zcr\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.076629 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4bjw\" (UniqueName: \"kubernetes.io/projected/930af4d8-9aa1-4432-bc69-13beb9a74812-kube-api-access-n4bjw\") pod \"dnsmasq-dns-5d7b5456f5-88zcr\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.076707 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-88zcr\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.178177 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-config\") pod \"dnsmasq-dns-5d7b5456f5-88zcr\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.178416 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4bjw\" (UniqueName: \"kubernetes.io/projected/930af4d8-9aa1-4432-bc69-13beb9a74812-kube-api-access-n4bjw\") pod \"dnsmasq-dns-5d7b5456f5-88zcr\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.178536 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-88zcr\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.179515 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-88zcr\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.180490 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-config\") pod \"dnsmasq-dns-5d7b5456f5-88zcr\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.190810 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpj7s" event={"ID":"2fb941bd-6874-4237-9da4-62575087d339","Type":"ContainerStarted","Data":"8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62"}
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.219582 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4bjw\" (UniqueName: \"kubernetes.io/projected/930af4d8-9aa1-4432-bc69-13beb9a74812-kube-api-access-n4bjw\") pod \"dnsmasq-dns-5d7b5456f5-88zcr\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.318440 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.325981 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-d8588"]
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.327414 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-d8588"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.336985 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-d8588"]
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.383683 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-config\") pod \"dnsmasq-dns-98ddfc8f-d8588\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " pod="openstack/dnsmasq-dns-98ddfc8f-d8588"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.384031 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-d8588\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " pod="openstack/dnsmasq-dns-98ddfc8f-d8588"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.384117 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4s4c\" (UniqueName: \"kubernetes.io/projected/4704db25-8ddf-4902-9b24-36c7fceaa5db-kube-api-access-s4s4c\") pod \"dnsmasq-dns-98ddfc8f-d8588\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " pod="openstack/dnsmasq-dns-98ddfc8f-d8588"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.485911 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-config\") pod \"dnsmasq-dns-98ddfc8f-d8588\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " pod="openstack/dnsmasq-dns-98ddfc8f-d8588"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.486212 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-d8588\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " pod="openstack/dnsmasq-dns-98ddfc8f-d8588"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.486241 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4s4c\" (UniqueName: \"kubernetes.io/projected/4704db25-8ddf-4902-9b24-36c7fceaa5db-kube-api-access-s4s4c\") pod \"dnsmasq-dns-98ddfc8f-d8588\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " pod="openstack/dnsmasq-dns-98ddfc8f-d8588"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.487073 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-config\") pod \"dnsmasq-dns-98ddfc8f-d8588\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " pod="openstack/dnsmasq-dns-98ddfc8f-d8588"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.487330 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-d8588\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " pod="openstack/dnsmasq-dns-98ddfc8f-d8588"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.509456 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4s4c\" (UniqueName: \"kubernetes.io/projected/4704db25-8ddf-4902-9b24-36c7fceaa5db-kube-api-access-s4s4c\") pod \"dnsmasq-dns-98ddfc8f-d8588\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " pod="openstack/dnsmasq-dns-98ddfc8f-d8588"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.711761 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-d8588"
Dec 02 15:01:42 crc kubenswrapper[4900]: I1202 15:01:42.815619 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-88zcr"]
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.121580 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.124519 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.130612 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-t7jn4"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.130742 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.131162 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.131448 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.132936 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.165351 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.187921 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-d8588"]
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.195223 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.195301 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63b18a68-c198-4804-80a5-740f18072e29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.195347 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.195383 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.195408 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bkxz\" (UniqueName: \"kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-kube-api-access-6bkxz\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.195428 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.195454 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63b18a68-c198-4804-80a5-740f18072e29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.195488 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.195524 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.198201 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-d8588" event={"ID":"4704db25-8ddf-4902-9b24-36c7fceaa5db","Type":"ContainerStarted","Data":"21e2a9431982ad589f3318eae78b37f55d305a63404f6a2595c4aa5955bdee33"}
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.201753 4900 generic.go:334] "Generic (PLEG): container finished" podID="2fb941bd-6874-4237-9da4-62575087d339" containerID="8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62" exitCode=0
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.201832 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpj7s" event={"ID":"2fb941bd-6874-4237-9da4-62575087d339","Type":"ContainerDied","Data":"8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62"}
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.202884 4900 generic.go:334] "Generic (PLEG): container finished" podID="930af4d8-9aa1-4432-bc69-13beb9a74812" containerID="97ccddce075e1ab357119c615d54b43166b52175171201ca05174d8ac330e781" exitCode=0
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.202914 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" event={"ID":"930af4d8-9aa1-4432-bc69-13beb9a74812","Type":"ContainerDied","Data":"97ccddce075e1ab357119c615d54b43166b52175171201ca05174d8ac330e781"}
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.202939 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" event={"ID":"930af4d8-9aa1-4432-bc69-13beb9a74812","Type":"ContainerStarted","Data":"2c034aa70444212b6daeda6a12d023fe8ffd9780e6b36088f6f029bdbca389ce"}
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.296503 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.296908 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.296952 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bkxz\" (UniqueName: \"kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-kube-api-access-6bkxz\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.296986 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.297031 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63b18a68-c198-4804-80a5-740f18072e29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.297068 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.297154 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.297217 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.297305 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63b18a68-c198-4804-80a5-740f18072e29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.298066 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.298075 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.298494 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.298729 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.301986 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.302197 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b9a81c7f807e9f6d59c0a6bb96e4ab27a20133b5fab806d5f68bdbf484461212/globalmount\"" pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.303207 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.303340 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63b18a68-c198-4804-80a5-740f18072e29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.309412 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63b18a68-c198-4804-80a5-740f18072e29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.341814 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bkxz\" (UniqueName: \"kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-kube-api-access-6bkxz\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.342773 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\") pod \"rabbitmq-server-0\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: E1202 15:01:43.398836 4900 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Dec 02 15:01:43 crc kubenswrapper[4900]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/930af4d8-9aa1-4432-bc69-13beb9a74812/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Dec 02 15:01:43 crc kubenswrapper[4900]: > podSandboxID="2c034aa70444212b6daeda6a12d023fe8ffd9780e6b36088f6f029bdbca389ce"
Dec 02 15:01:43 crc kubenswrapper[4900]: E1202 15:01:43.398997 4900 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Dec 02 15:01:43 crc kubenswrapper[4900]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4bjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d7b5456f5-88zcr_openstack(930af4d8-9aa1-4432-bc69-13beb9a74812): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/930af4d8-9aa1-4432-bc69-13beb9a74812/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Dec 02 15:01:43 crc kubenswrapper[4900]: > logger="UnhandledError"
Dec 02 15:01:43 crc kubenswrapper[4900]: E1202 15:01:43.400136 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/930af4d8-9aa1-4432-bc69-13beb9a74812/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" podUID="930af4d8-9aa1-4432-bc69-13beb9a74812"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.447201 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.448416 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.450611 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.450686 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fw77l"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.450928 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.453263 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.456773 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.464673 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.467261 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.500804 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.500906 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.501541 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.501633 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.501787 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mjdf\" (UniqueName: \"kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-kube-api-access-6mjdf\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.501860 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.501944 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.502099 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.502201 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.603762 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.605168 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.605859 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.604257 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.606521 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.606564 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.606823 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.607682 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mjdf\" (UniqueName: \"kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-kube-api-access-6mjdf\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.607736 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.607799 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.607887 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.608325 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.609752 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.612505 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.616252 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.616296 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/abd2f31a42c10988e8066328fb23961a2f90c3f144a3dc004e0c04ac14fce2ce/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.624465 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.632054 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.633732 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mjdf\" (UniqueName: \"kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-kube-api-access-6mjdf\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.662777 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\") pod \"rabbitmq-cell1-server-0\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.822187 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 02 15:01:43 crc kubenswrapper[4900]: I1202 15:01:43.938526 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.063893 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 02 15:01:44 crc kubenswrapper[4900]: W1202 15:01:44.087522 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91ee4b7a_7a71_4379_95fa_b77cc7a414b7.slice/crio-73b0438b6dcf5aedcb70fdf245fb21f87d1254101a61712c0861f11cd023030e WatchSource:0}: Error finding container 73b0438b6dcf5aedcb70fdf245fb21f87d1254101a61712c0861f11cd023030e: Status 404 returned error can't find the container with id 73b0438b6dcf5aedcb70fdf245fb21f87d1254101a61712c0861f11cd023030e
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.210932 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"91ee4b7a-7a71-4379-95fa-b77cc7a414b7","Type":"ContainerStarted","Data":"73b0438b6dcf5aedcb70fdf245fb21f87d1254101a61712c0861f11cd023030e"}
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.212132 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63b18a68-c198-4804-80a5-740f18072e29","Type":"ContainerStarted","Data":"ed219b4d95df6f813aa27f6c8d1fedf631c88eb8b2bfef3c6b9b66a2377619f0"}
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.214213 4900 generic.go:334] "Generic (PLEG): container finished" podID="4704db25-8ddf-4902-9b24-36c7fceaa5db" containerID="21db73ead88989902886df9c4db776ebd7d23bc0d668dc1b69c85e6d834a5f29" exitCode=0
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.214277 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-d8588" event={"ID":"4704db25-8ddf-4902-9b24-36c7fceaa5db","Type":"ContainerDied","Data":"21db73ead88989902886df9c4db776ebd7d23bc0d668dc1b69c85e6d834a5f29"}
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.218548 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpj7s" event={"ID":"2fb941bd-6874-4237-9da4-62575087d339","Type":"ContainerStarted","Data":"398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1"}
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.257753 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xpj7s" podStartSLOduration=2.714847477 podStartE2EDuration="5.257732139s" podCreationTimestamp="2025-12-02 15:01:39 +0000 UTC" firstStartedPulling="2025-12-02 15:01:41.181405424 +0000 UTC m=+4746.597219275" lastFinishedPulling="2025-12-02 15:01:43.724290086 +0000 UTC m=+4749.140103937" observedRunningTime="2025-12-02 15:01:44.255422684 +0000 UTC m=+4749.671236535" watchObservedRunningTime="2025-12-02 15:01:44.257732139 +0000 UTC m=+4749.673545990"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.540499 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.542590 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.544870 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.545047 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-sh4ps"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.545099 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.545231 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.549459 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.572868 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.730604 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/96bd872d-1991-4668-b075-19f9673dccd4-config-data-default\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.730669 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49928444-72ed-4268-9e76-a6037b08afec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49928444-72ed-4268-9e76-a6037b08afec\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.730726 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/96bd872d-1991-4668-b075-19f9673dccd4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.730745 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2frnz\" (UniqueName: \"kubernetes.io/projected/96bd872d-1991-4668-b075-19f9673dccd4-kube-api-access-2frnz\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.730776 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd872d-1991-4668-b075-19f9673dccd4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.730792 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96bd872d-1991-4668-b075-19f9673dccd4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.730810 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd872d-1991-4668-b075-19f9673dccd4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.731023 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/96bd872d-1991-4668-b075-19f9673dccd4-kolla-config\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.832314 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/96bd872d-1991-4668-b075-19f9673dccd4-config-data-default\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.832370 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49928444-72ed-4268-9e76-a6037b08afec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49928444-72ed-4268-9e76-a6037b08afec\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.832425 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/96bd872d-1991-4668-b075-19f9673dccd4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.832449 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2frnz\" (UniqueName: \"kubernetes.io/projected/96bd872d-1991-4668-b075-19f9673dccd4-kube-api-access-2frnz\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.832476 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd872d-1991-4668-b075-19f9673dccd4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.832492 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96bd872d-1991-4668-b075-19f9673dccd4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.832508 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd872d-1991-4668-b075-19f9673dccd4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.832537 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/96bd872d-1991-4668-b075-19f9673dccd4-kolla-config\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.833349 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/96bd872d-1991-4668-b075-19f9673dccd4-kolla-config\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.833848 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/96bd872d-1991-4668-b075-19f9673dccd4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.834367 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96bd872d-1991-4668-b075-19f9673dccd4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.834431 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/96bd872d-1991-4668-b075-19f9673dccd4-config-data-default\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0"
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.836592 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.836653 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49928444-72ed-4268-9e76-a6037b08afec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49928444-72ed-4268-9e76-a6037b08afec\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4fab13e96057f5f051041c8597c02dd9a5d9dd6e3abd0e8861079ae65f2eda12/globalmount\"" pod="openstack/openstack-galera-0" Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.853869 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd872d-1991-4668-b075-19f9673dccd4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0" Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.855719 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd872d-1991-4668-b075-19f9673dccd4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0" Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.860539 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2frnz\" (UniqueName: \"kubernetes.io/projected/96bd872d-1991-4668-b075-19f9673dccd4-kube-api-access-2frnz\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0" Dec 02 15:01:44 crc kubenswrapper[4900]: I1202 15:01:44.880939 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49928444-72ed-4268-9e76-a6037b08afec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49928444-72ed-4268-9e76-a6037b08afec\") pod \"openstack-galera-0\" (UID: \"96bd872d-1991-4668-b075-19f9673dccd4\") " pod="openstack/openstack-galera-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.084169 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.085337 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.087973 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vcstk" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.089212 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.107006 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.177267 4900 util.go:30] "No sandbox for pod can be found. 
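The csi_attacher.go:380 and operation_generator.go:580 entries above explain why no NodeStageVolume call appears for the hostpath-provisioner PVC: the driver does not advertise the STAGE_UNSTAGE_VOLUME node capability, so the kubelet marks MountVolume.MountDevice as succeeded immediately and goes straight to the per-pod publish step (the later MountVolume.SetUp line). A minimal Go sketch of that capability check, assuming a generated CSI NodeClient; this is illustrative, not the kubelet's actual code:

    // Sketch: discover whether a CSI driver wants NodeStageVolume /
    // NodeUnstageVolume at all. If this returns false, the staging step
    // is skipped, which is exactly what the log above reports.
    package csicheck

    import (
        "context"

        csi "github.com/container-storage-interface/spec/lib/go/csi"
    )

    func nodeSupportsStageUnstage(ctx context.Context, node csi.NodeClient) (bool, error) {
        resp, err := node.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
        if err != nil {
            return false, err
        }
        for _, c := range resp.GetCapabilities() {
            if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
                return true, nil
            }
        }
        return false, nil
    }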
Need to start a new one" pod="openstack/openstack-galera-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.236472 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" event={"ID":"930af4d8-9aa1-4432-bc69-13beb9a74812","Type":"ContainerStarted","Data":"00ebd2b29f91a9d364d7667aea634961e93d2751a752e7e7037b83cd1b84723e"} Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.237715 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.242516 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsjlh\" (UniqueName: \"kubernetes.io/projected/b7187b9a-4969-4f77-b03e-51d588486060-kube-api-access-vsjlh\") pod \"memcached-0\" (UID: \"b7187b9a-4969-4f77-b03e-51d588486060\") " pod="openstack/memcached-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.242625 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7187b9a-4969-4f77-b03e-51d588486060-config-data\") pod \"memcached-0\" (UID: \"b7187b9a-4969-4f77-b03e-51d588486060\") " pod="openstack/memcached-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.242676 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b7187b9a-4969-4f77-b03e-51d588486060-kolla-config\") pod \"memcached-0\" (UID: \"b7187b9a-4969-4f77-b03e-51d588486060\") " pod="openstack/memcached-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.246804 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-d8588" event={"ID":"4704db25-8ddf-4902-9b24-36c7fceaa5db","Type":"ContainerStarted","Data":"cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb"} Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.246848 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-d8588" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.265436 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" podStartSLOduration=4.265416108 podStartE2EDuration="4.265416108s" podCreationTimestamp="2025-12-02 15:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:45.256873886 +0000 UTC m=+4750.672687737" watchObservedRunningTime="2025-12-02 15:01:45.265416108 +0000 UTC m=+4750.681229959" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.282043 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-d8588" podStartSLOduration=3.282027449 podStartE2EDuration="3.282027449s" podCreationTimestamp="2025-12-02 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:45.279306602 +0000 UTC m=+4750.695120453" watchObservedRunningTime="2025-12-02 15:01:45.282027449 +0000 UTC m=+4750.697841300" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.344132 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7187b9a-4969-4f77-b03e-51d588486060-config-data\") 
pod \"memcached-0\" (UID: \"b7187b9a-4969-4f77-b03e-51d588486060\") " pod="openstack/memcached-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.344170 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b7187b9a-4969-4f77-b03e-51d588486060-kolla-config\") pod \"memcached-0\" (UID: \"b7187b9a-4969-4f77-b03e-51d588486060\") " pod="openstack/memcached-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.344271 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsjlh\" (UniqueName: \"kubernetes.io/projected/b7187b9a-4969-4f77-b03e-51d588486060-kube-api-access-vsjlh\") pod \"memcached-0\" (UID: \"b7187b9a-4969-4f77-b03e-51d588486060\") " pod="openstack/memcached-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.345023 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b7187b9a-4969-4f77-b03e-51d588486060-kolla-config\") pod \"memcached-0\" (UID: \"b7187b9a-4969-4f77-b03e-51d588486060\") " pod="openstack/memcached-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.345740 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7187b9a-4969-4f77-b03e-51d588486060-config-data\") pod \"memcached-0\" (UID: \"b7187b9a-4969-4f77-b03e-51d588486060\") " pod="openstack/memcached-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.363237 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsjlh\" (UniqueName: \"kubernetes.io/projected/b7187b9a-4969-4f77-b03e-51d588486060-kube-api-access-vsjlh\") pod \"memcached-0\" (UID: \"b7187b9a-4969-4f77-b03e-51d588486060\") " pod="openstack/memcached-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.401526 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.616932 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 15:01:45 crc kubenswrapper[4900]: I1202 15:01:45.877413 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 02 15:01:45 crc kubenswrapper[4900]: W1202 15:01:45.888308 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7187b9a_4969_4f77_b03e_51d588486060.slice/crio-cbdcdd2a982df7a9d1aa40c6120246b48b0992977d951c04542ae510e33cc8e7 WatchSource:0}: Error finding container cbdcdd2a982df7a9d1aa40c6120246b48b0992977d951c04542ae510e33cc8e7: Status 404 returned error can't find the container with id cbdcdd2a982df7a9d1aa40c6120246b48b0992977d951c04542ae510e33cc8e7 Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.039367 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.040584 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.042772 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.043298 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-nsr48" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.043478 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.043659 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.050489 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.156048 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d82f787a-45fd-4580-bf19-12e6995493c1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.156102 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjvf\" (UniqueName: \"kubernetes.io/projected/d82f787a-45fd-4580-bf19-12e6995493c1-kube-api-access-4qjvf\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.156122 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f787a-45fd-4580-bf19-12e6995493c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.156216 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8bacf561-7e45-49df-8cd7-0fbf9eccba68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8bacf561-7e45-49df-8cd7-0fbf9eccba68\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.156250 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d82f787a-45fd-4580-bf19-12e6995493c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.156275 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d82f787a-45fd-4580-bf19-12e6995493c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.156309 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d82f787a-45fd-4580-bf19-12e6995493c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.156366 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82f787a-45fd-4580-bf19-12e6995493c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.254215 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"91ee4b7a-7a71-4379-95fa-b77cc7a414b7","Type":"ContainerStarted","Data":"34e8536c44720b188d52cd09dad67e0084387f482c2c8104fded625376cd3104"} Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.256223 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63b18a68-c198-4804-80a5-740f18072e29","Type":"ContainerStarted","Data":"b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7"} Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.257290 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8bacf561-7e45-49df-8cd7-0fbf9eccba68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8bacf561-7e45-49df-8cd7-0fbf9eccba68\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.257355 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d82f787a-45fd-4580-bf19-12e6995493c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.257382 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d82f787a-45fd-4580-bf19-12e6995493c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.257442 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d82f787a-45fd-4580-bf19-12e6995493c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.257494 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82f787a-45fd-4580-bf19-12e6995493c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.257527 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d82f787a-45fd-4580-bf19-12e6995493c1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.257562 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f787a-45fd-4580-bf19-12e6995493c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.257579 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qjvf\" (UniqueName: \"kubernetes.io/projected/d82f787a-45fd-4580-bf19-12e6995493c1-kube-api-access-4qjvf\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.257994 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"96bd872d-1991-4668-b075-19f9673dccd4","Type":"ContainerStarted","Data":"33c5cc109af60c73f4161c383582124ea05f61f36e2376a8e18dfcc8aacbe960"} Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.258033 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"96bd872d-1991-4668-b075-19f9673dccd4","Type":"ContainerStarted","Data":"e2df6f944406b94c550886bf3e18476e619f40ced2b78ddd2adf0160d6961659"} Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.258528 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d82f787a-45fd-4580-bf19-12e6995493c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.259664 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d82f787a-45fd-4580-bf19-12e6995493c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.259751 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82f787a-45fd-4580-bf19-12e6995493c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.260055 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d82f787a-45fd-4580-bf19-12e6995493c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.260749 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.260780 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8bacf561-7e45-49df-8cd7-0fbf9eccba68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8bacf561-7e45-49df-8cd7-0fbf9eccba68\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/17c39b4a85b29f658e9d464e80d5159e50c06b0d4b871f8664035450a14ba685/globalmount\"" pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.260734 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b7187b9a-4969-4f77-b03e-51d588486060","Type":"ContainerStarted","Data":"04da9f7822f442997a9549ea0e31fc8f201c5c341083232538d7ac2df75550a1"} Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.260875 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b7187b9a-4969-4f77-b03e-51d588486060","Type":"ContainerStarted","Data":"cbdcdd2a982df7a9d1aa40c6120246b48b0992977d951c04542ae510e33cc8e7"} Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.266519 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f787a-45fd-4580-bf19-12e6995493c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.283107 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d82f787a-45fd-4580-bf19-12e6995493c1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.291187 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qjvf\" (UniqueName: \"kubernetes.io/projected/d82f787a-45fd-4580-bf19-12e6995493c1-kube-api-access-4qjvf\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.325057 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8bacf561-7e45-49df-8cd7-0fbf9eccba68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8bacf561-7e45-49df-8cd7-0fbf9eccba68\") pod \"openstack-cell1-galera-0\" (UID: \"d82f787a-45fd-4580-bf19-12e6995493c1\") " pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.356741 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.3567201770000001 podStartE2EDuration="1.356720177s" podCreationTimestamp="2025-12-02 15:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:46.351846329 +0000 UTC m=+4751.767660180" watchObservedRunningTime="2025-12-02 15:01:46.356720177 +0000 UTC m=+4751.772534028" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.361259 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:46 crc kubenswrapper[4900]: I1202 15:01:46.806520 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 15:01:46 crc kubenswrapper[4900]: W1202 15:01:46.811435 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd82f787a_45fd_4580_bf19_12e6995493c1.slice/crio-6886ca49f8121fe608731a27a2e72bcd047358afb0b8f374d0dfd8ff6046a300 WatchSource:0}: Error finding container 6886ca49f8121fe608731a27a2e72bcd047358afb0b8f374d0dfd8ff6046a300: Status 404 returned error can't find the container with id 6886ca49f8121fe608731a27a2e72bcd047358afb0b8f374d0dfd8ff6046a300 Dec 02 15:01:47 crc kubenswrapper[4900]: I1202 15:01:47.269450 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d82f787a-45fd-4580-bf19-12e6995493c1","Type":"ContainerStarted","Data":"908e5c9a9653b282d269c58822e75c684df0b575f04c7c8c82bd8818eb619c1a"} Dec 02 15:01:47 crc kubenswrapper[4900]: I1202 15:01:47.269838 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d82f787a-45fd-4580-bf19-12e6995493c1","Type":"ContainerStarted","Data":"6886ca49f8121fe608731a27a2e72bcd047358afb0b8f374d0dfd8ff6046a300"} Dec 02 15:01:47 crc kubenswrapper[4900]: I1202 15:01:47.269888 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 02 15:01:50 crc kubenswrapper[4900]: I1202 15:01:50.351618 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xpj7s" Dec 02 15:01:50 crc kubenswrapper[4900]: I1202 15:01:50.352114 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xpj7s" Dec 02 15:01:50 crc kubenswrapper[4900]: I1202 15:01:50.403991 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xpj7s" Dec 02 15:01:51 crc kubenswrapper[4900]: I1202 15:01:51.303844 4900 generic.go:334] "Generic (PLEG): container finished" podID="96bd872d-1991-4668-b075-19f9673dccd4" containerID="33c5cc109af60c73f4161c383582124ea05f61f36e2376a8e18dfcc8aacbe960" exitCode=0 Dec 02 15:01:51 crc kubenswrapper[4900]: I1202 15:01:51.303985 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"96bd872d-1991-4668-b075-19f9673dccd4","Type":"ContainerDied","Data":"33c5cc109af60c73f4161c383582124ea05f61f36e2376a8e18dfcc8aacbe960"} Dec 02 15:01:51 crc kubenswrapper[4900]: I1202 15:01:51.376938 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xpj7s" Dec 02 15:01:51 crc kubenswrapper[4900]: I1202 15:01:51.423005 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xpj7s"] Dec 02 15:01:51 crc kubenswrapper[4900]: E1202 15:01:51.502992 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd82f787a_45fd_4580_bf19_12e6995493c1.slice/crio-908e5c9a9653b282d269c58822e75c684df0b575f04c7c8c82bd8818eb619c1a.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd82f787a_45fd_4580_bf19_12e6995493c1.slice/crio-conmon-908e5c9a9653b282d269c58822e75c684df0b575f04c7c8c82bd8818eb619c1a.scope\": RecentStats: unable to find data in memory cache]" Dec 02 15:01:52 crc kubenswrapper[4900]: I1202 15:01:52.317581 4900 generic.go:334] "Generic (PLEG): container finished" podID="d82f787a-45fd-4580-bf19-12e6995493c1" containerID="908e5c9a9653b282d269c58822e75c684df0b575f04c7c8c82bd8818eb619c1a" exitCode=0 Dec 02 15:01:52 crc kubenswrapper[4900]: I1202 15:01:52.317696 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d82f787a-45fd-4580-bf19-12e6995493c1","Type":"ContainerDied","Data":"908e5c9a9653b282d269c58822e75c684df0b575f04c7c8c82bd8818eb619c1a"} Dec 02 15:01:52 crc kubenswrapper[4900]: I1202 15:01:52.319975 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" Dec 02 15:01:52 crc kubenswrapper[4900]: I1202 15:01:52.323950 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"96bd872d-1991-4668-b075-19f9673dccd4","Type":"ContainerStarted","Data":"68cb9074ef428f66a26936e52d298dd2f042dc9481c4ffa8095068fd4c87810e"} Dec 02 15:01:52 crc kubenswrapper[4900]: I1202 15:01:52.408754 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.408719944 podStartE2EDuration="9.408719944s" podCreationTimestamp="2025-12-02 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:52.406204393 +0000 UTC m=+4757.822018274" watchObservedRunningTime="2025-12-02 15:01:52.408719944 +0000 UTC m=+4757.824533845" Dec 02 15:01:52 crc kubenswrapper[4900]: I1202 15:01:52.714503 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-d8588" Dec 02 15:01:52 crc kubenswrapper[4900]: I1202 15:01:52.780712 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-88zcr"] Dec 02 15:01:53 crc kubenswrapper[4900]: I1202 15:01:53.332395 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d82f787a-45fd-4580-bf19-12e6995493c1","Type":"ContainerStarted","Data":"f29ff53940d76efd2b20e8409c34555e0c1806d9e17cd3d27458bb8efac3f57f"} Dec 02 15:01:53 crc kubenswrapper[4900]: I1202 15:01:53.332521 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" podUID="930af4d8-9aa1-4432-bc69-13beb9a74812" containerName="dnsmasq-dns" containerID="cri-o://00ebd2b29f91a9d364d7667aea634961e93d2751a752e7e7037b83cd1b84723e" gracePeriod=10 Dec 02 15:01:53 crc kubenswrapper[4900]: I1202 15:01:53.332701 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xpj7s" podUID="2fb941bd-6874-4237-9da4-62575087d339" containerName="registry-server" containerID="cri-o://398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1" gracePeriod=2 Dec 02 15:01:53 crc kubenswrapper[4900]: I1202 15:01:53.356865 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.356844754 podStartE2EDuration="8.356844754s" podCreationTimestamp="2025-12-02 15:01:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:53.356294249 +0000 UTC m=+4758.772108100" watchObservedRunningTime="2025-12-02 15:01:53.356844754 +0000 UTC m=+4758.772658605" Dec 02 15:01:54 crc kubenswrapper[4900]: I1202 15:01:54.344869 4900 generic.go:334] "Generic (PLEG): container finished" podID="930af4d8-9aa1-4432-bc69-13beb9a74812" containerID="00ebd2b29f91a9d364d7667aea634961e93d2751a752e7e7037b83cd1b84723e" exitCode=0 Dec 02 15:01:54 crc kubenswrapper[4900]: I1202 15:01:54.344980 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" event={"ID":"930af4d8-9aa1-4432-bc69-13beb9a74812","Type":"ContainerDied","Data":"00ebd2b29f91a9d364d7667aea634961e93d2751a752e7e7037b83cd1b84723e"} Dec 02 15:01:54 crc kubenswrapper[4900]: I1202 15:01:54.917959 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" Dec 02 15:01:54 crc kubenswrapper[4900]: I1202 15:01:54.928716 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xpj7s" Dec 02 15:01:54 crc kubenswrapper[4900]: I1202 15:01:54.996481 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-config\") pod \"930af4d8-9aa1-4432-bc69-13beb9a74812\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " Dec 02 15:01:54 crc kubenswrapper[4900]: I1202 15:01:54.996581 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-catalog-content\") pod \"2fb941bd-6874-4237-9da4-62575087d339\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " Dec 02 15:01:54 crc kubenswrapper[4900]: I1202 15:01:54.996671 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4bjw\" (UniqueName: \"kubernetes.io/projected/930af4d8-9aa1-4432-bc69-13beb9a74812-kube-api-access-n4bjw\") pod \"930af4d8-9aa1-4432-bc69-13beb9a74812\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " Dec 02 15:01:54 crc kubenswrapper[4900]: I1202 15:01:54.996716 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-dns-svc\") pod \"930af4d8-9aa1-4432-bc69-13beb9a74812\" (UID: \"930af4d8-9aa1-4432-bc69-13beb9a74812\") " Dec 02 15:01:54 crc kubenswrapper[4900]: I1202 15:01:54.996740 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz2ss\" (UniqueName: \"kubernetes.io/projected/2fb941bd-6874-4237-9da4-62575087d339-kube-api-access-lz2ss\") pod \"2fb941bd-6874-4237-9da4-62575087d339\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " Dec 02 15:01:54 crc kubenswrapper[4900]: I1202 15:01:54.996784 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-utilities\") pod \"2fb941bd-6874-4237-9da4-62575087d339\" (UID: \"2fb941bd-6874-4237-9da4-62575087d339\") " Dec 02 15:01:54 crc kubenswrapper[4900]: I1202 15:01:54.998269 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-utilities" 
(OuterVolumeSpecName: "utilities") pod "2fb941bd-6874-4237-9da4-62575087d339" (UID: "2fb941bd-6874-4237-9da4-62575087d339"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.001914 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb941bd-6874-4237-9da4-62575087d339-kube-api-access-lz2ss" (OuterVolumeSpecName: "kube-api-access-lz2ss") pod "2fb941bd-6874-4237-9da4-62575087d339" (UID: "2fb941bd-6874-4237-9da4-62575087d339"). InnerVolumeSpecName "kube-api-access-lz2ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.006922 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930af4d8-9aa1-4432-bc69-13beb9a74812-kube-api-access-n4bjw" (OuterVolumeSpecName: "kube-api-access-n4bjw") pod "930af4d8-9aa1-4432-bc69-13beb9a74812" (UID: "930af4d8-9aa1-4432-bc69-13beb9a74812"). InnerVolumeSpecName "kube-api-access-n4bjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.028575 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "930af4d8-9aa1-4432-bc69-13beb9a74812" (UID: "930af4d8-9aa1-4432-bc69-13beb9a74812"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.035560 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-config" (OuterVolumeSpecName: "config") pod "930af4d8-9aa1-4432-bc69-13beb9a74812" (UID: "930af4d8-9aa1-4432-bc69-13beb9a74812"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.098987 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4bjw\" (UniqueName: \"kubernetes.io/projected/930af4d8-9aa1-4432-bc69-13beb9a74812-kube-api-access-n4bjw\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.099022 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.099034 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz2ss\" (UniqueName: \"kubernetes.io/projected/2fb941bd-6874-4237-9da4-62575087d339-kube-api-access-lz2ss\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.099044 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.099057 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930af4d8-9aa1-4432-bc69-13beb9a74812-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.109528 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fb941bd-6874-4237-9da4-62575087d339" (UID: "2fb941bd-6874-4237-9da4-62575087d339"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.177388 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.177892 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.200337 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb941bd-6874-4237-9da4-62575087d339-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.352770 4900 generic.go:334] "Generic (PLEG): container finished" podID="2fb941bd-6874-4237-9da4-62575087d339" containerID="398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1" exitCode=0 Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.352840 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpj7s" event={"ID":"2fb941bd-6874-4237-9da4-62575087d339","Type":"ContainerDied","Data":"398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1"} Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.352846 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xpj7s" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.352869 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpj7s" event={"ID":"2fb941bd-6874-4237-9da4-62575087d339","Type":"ContainerDied","Data":"b623c0ec0f7f3fdd674d17641186a1f853e09a029b0866c6d0eb72c4ac53cc1c"} Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.352887 4900 scope.go:117] "RemoveContainer" containerID="398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.354495 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" event={"ID":"930af4d8-9aa1-4432-bc69-13beb9a74812","Type":"ContainerDied","Data":"2c034aa70444212b6daeda6a12d023fe8ffd9780e6b36088f6f029bdbca389ce"} Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.354549 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-88zcr" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.368910 4900 scope.go:117] "RemoveContainer" containerID="8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.392496 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-88zcr"] Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.392557 4900 scope.go:117] "RemoveContainer" containerID="8335f2edd243a7c11d76b80f756221cf127b3dff31c23100633fa8eedfca764c" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.401095 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-88zcr"] Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.404208 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.409387 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xpj7s"] Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.413726 4900 scope.go:117] "RemoveContainer" containerID="398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1" Dec 02 15:01:55 crc kubenswrapper[4900]: E1202 15:01:55.414226 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1\": container with ID starting with 398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1 not found: ID does not exist" containerID="398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.414263 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1"} err="failed to get container status \"398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1\": rpc error: code = NotFound desc = could not find container \"398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1\": container with ID starting with 398a340f2447bfad447b28bccd38d3efa6c53a05d694a0e64e1d5e11b4c5caf1 not found: ID does not exist" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.414288 4900 scope.go:117] "RemoveContainer" containerID="8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62" Dec 02 15:01:55 crc kubenswrapper[4900]: 
E1202 15:01:55.414542 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62\": container with ID starting with 8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62 not found: ID does not exist" containerID="8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.414578 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62"} err="failed to get container status \"8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62\": rpc error: code = NotFound desc = could not find container \"8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62\": container with ID starting with 8b56df6391c942bab46463bbe91f466a5ca24b20d633d670b18a7394d334de62 not found: ID does not exist" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.414595 4900 scope.go:117] "RemoveContainer" containerID="8335f2edd243a7c11d76b80f756221cf127b3dff31c23100633fa8eedfca764c" Dec 02 15:01:55 crc kubenswrapper[4900]: E1202 15:01:55.414936 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8335f2edd243a7c11d76b80f756221cf127b3dff31c23100633fa8eedfca764c\": container with ID starting with 8335f2edd243a7c11d76b80f756221cf127b3dff31c23100633fa8eedfca764c not found: ID does not exist" containerID="8335f2edd243a7c11d76b80f756221cf127b3dff31c23100633fa8eedfca764c" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.414975 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8335f2edd243a7c11d76b80f756221cf127b3dff31c23100633fa8eedfca764c"} err="failed to get container status \"8335f2edd243a7c11d76b80f756221cf127b3dff31c23100633fa8eedfca764c\": rpc error: code = NotFound desc = could not find container \"8335f2edd243a7c11d76b80f756221cf127b3dff31c23100633fa8eedfca764c\": container with ID starting with 8335f2edd243a7c11d76b80f756221cf127b3dff31c23100633fa8eedfca764c not found: ID does not exist" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.414992 4900 scope.go:117] "RemoveContainer" containerID="00ebd2b29f91a9d364d7667aea634961e93d2751a752e7e7037b83cd1b84723e" Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.415768 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xpj7s"] Dec 02 15:01:55 crc kubenswrapper[4900]: I1202 15:01:55.444014 4900 scope.go:117] "RemoveContainer" containerID="97ccddce075e1ab357119c615d54b43166b52175171201ca05174d8ac330e781" Dec 02 15:01:56 crc kubenswrapper[4900]: I1202 15:01:56.362569 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:56 crc kubenswrapper[4900]: I1202 15:01:56.363820 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:56 crc kubenswrapper[4900]: I1202 15:01:56.926113 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb941bd-6874-4237-9da4-62575087d339" path="/var/lib/kubelet/pods/2fb941bd-6874-4237-9da4-62575087d339/volumes" Dec 02 15:01:56 crc kubenswrapper[4900]: I1202 15:01:56.927364 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
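The paired log.go:32 / pod_container_deletor.go:53 entries above are a benign race: RemoveContainer is issued, CRI-O deletes the container, and a follow-up ContainerStatus query then gets gRPC NotFound, which the deletor merely logs; the end state (container gone) is the desired one. The idiomatic way for a gRPC-based CRI client to treat that error as "already removed" (a sketch of the pattern, not kubelet's actual code):

    // A CRI delete can treat NotFound as success: the container ID
    // no longer exists, so there is nothing left to remove.
    package crisketch

    import (
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyGone reports whether err is the runtime's NotFound,
    // i.e. the delete raced something else and is now a no-op.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }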
podUID="930af4d8-9aa1-4432-bc69-13beb9a74812" path="/var/lib/kubelet/pods/930af4d8-9aa1-4432-bc69-13beb9a74812/volumes" Dec 02 15:01:57 crc kubenswrapper[4900]: I1202 15:01:57.354279 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 02 15:01:57 crc kubenswrapper[4900]: I1202 15:01:57.445325 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 02 15:01:58 crc kubenswrapper[4900]: I1202 15:01:58.713972 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 02 15:01:58 crc kubenswrapper[4900]: I1202 15:01:58.789516 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 02 15:02:15 crc kubenswrapper[4900]: I1202 15:02:15.117427 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:02:15 crc kubenswrapper[4900]: I1202 15:02:15.118032 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:02:17 crc kubenswrapper[4900]: I1202 15:02:17.546987 4900 generic.go:334] "Generic (PLEG): container finished" podID="91ee4b7a-7a71-4379-95fa-b77cc7a414b7" containerID="34e8536c44720b188d52cd09dad67e0084387f482c2c8104fded625376cd3104" exitCode=0 Dec 02 15:02:17 crc kubenswrapper[4900]: I1202 15:02:17.547086 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"91ee4b7a-7a71-4379-95fa-b77cc7a414b7","Type":"ContainerDied","Data":"34e8536c44720b188d52cd09dad67e0084387f482c2c8104fded625376cd3104"} Dec 02 15:02:17 crc kubenswrapper[4900]: I1202 15:02:17.552840 4900 generic.go:334] "Generic (PLEG): container finished" podID="63b18a68-c198-4804-80a5-740f18072e29" containerID="b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7" exitCode=0 Dec 02 15:02:17 crc kubenswrapper[4900]: I1202 15:02:17.552897 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63b18a68-c198-4804-80a5-740f18072e29","Type":"ContainerDied","Data":"b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7"} Dec 02 15:02:18 crc kubenswrapper[4900]: I1202 15:02:18.561530 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"91ee4b7a-7a71-4379-95fa-b77cc7a414b7","Type":"ContainerStarted","Data":"18d1038137c44629ee6fbeccd7bebb43f6683d744dd14415716e09e5b32a80c6"} Dec 02 15:02:18 crc kubenswrapper[4900]: I1202 15:02:18.562072 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:18 crc kubenswrapper[4900]: I1202 15:02:18.564821 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63b18a68-c198-4804-80a5-740f18072e29","Type":"ContainerStarted","Data":"2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3"} Dec 02 15:02:18 crc kubenswrapper[4900]: I1202 15:02:18.564994 4900 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 02 15:02:18 crc kubenswrapper[4900]: I1202 15:02:18.588799 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.588784471 podStartE2EDuration="36.588784471s" podCreationTimestamp="2025-12-02 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:02:18.584463888 +0000 UTC m=+4784.000277749" watchObservedRunningTime="2025-12-02 15:02:18.588784471 +0000 UTC m=+4784.004598322" Dec 02 15:02:18 crc kubenswrapper[4900]: I1202 15:02:18.612063 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.61204263 podStartE2EDuration="36.61204263s" podCreationTimestamp="2025-12-02 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:02:18.607826961 +0000 UTC m=+4784.023640812" watchObservedRunningTime="2025-12-02 15:02:18.61204263 +0000 UTC m=+4784.027856481" Dec 02 15:02:33 crc kubenswrapper[4900]: I1202 15:02:33.470899 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 02 15:02:33 crc kubenswrapper[4900]: I1202 15:02:33.825968 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.072294 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-whglv"] Dec 02 15:02:39 crc kubenswrapper[4900]: E1202 15:02:39.073226 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb941bd-6874-4237-9da4-62575087d339" containerName="extract-utilities" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.073239 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb941bd-6874-4237-9da4-62575087d339" containerName="extract-utilities" Dec 02 15:02:39 crc kubenswrapper[4900]: E1202 15:02:39.073250 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930af4d8-9aa1-4432-bc69-13beb9a74812" containerName="init" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.073256 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="930af4d8-9aa1-4432-bc69-13beb9a74812" containerName="init" Dec 02 15:02:39 crc kubenswrapper[4900]: E1202 15:02:39.073274 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930af4d8-9aa1-4432-bc69-13beb9a74812" containerName="dnsmasq-dns" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.073280 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="930af4d8-9aa1-4432-bc69-13beb9a74812" containerName="dnsmasq-dns" Dec 02 15:02:39 crc kubenswrapper[4900]: E1202 15:02:39.073292 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb941bd-6874-4237-9da4-62575087d339" containerName="registry-server" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.073299 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb941bd-6874-4237-9da4-62575087d339" containerName="registry-server" Dec 02 15:02:39 crc kubenswrapper[4900]: E1202 15:02:39.073314 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb941bd-6874-4237-9da4-62575087d339" containerName="extract-content" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 
15:02:39.073321 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb941bd-6874-4237-9da4-62575087d339" containerName="extract-content" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.073456 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb941bd-6874-4237-9da4-62575087d339" containerName="registry-server" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.073475 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="930af4d8-9aa1-4432-bc69-13beb9a74812" containerName="dnsmasq-dns" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.074199 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.094046 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-whglv"] Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.179695 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-config\") pod \"dnsmasq-dns-5b7946d7b9-whglv\" (UID: \"121dcfbe-214b-4bae-86d1-10d236d28c4a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.180005 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-whglv\" (UID: \"121dcfbe-214b-4bae-86d1-10d236d28c4a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.180167 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcqzt\" (UniqueName: \"kubernetes.io/projected/121dcfbe-214b-4bae-86d1-10d236d28c4a-kube-api-access-xcqzt\") pod \"dnsmasq-dns-5b7946d7b9-whglv\" (UID: \"121dcfbe-214b-4bae-86d1-10d236d28c4a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.281449 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-config\") pod \"dnsmasq-dns-5b7946d7b9-whglv\" (UID: \"121dcfbe-214b-4bae-86d1-10d236d28c4a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.281509 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-whglv\" (UID: \"121dcfbe-214b-4bae-86d1-10d236d28c4a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.281581 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcqzt\" (UniqueName: \"kubernetes.io/projected/121dcfbe-214b-4bae-86d1-10d236d28c4a-kube-api-access-xcqzt\") pod \"dnsmasq-dns-5b7946d7b9-whglv\" (UID: \"121dcfbe-214b-4bae-86d1-10d236d28c4a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.282345 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-config\") pod \"dnsmasq-dns-5b7946d7b9-whglv\" (UID: 
\"121dcfbe-214b-4bae-86d1-10d236d28c4a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.284165 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-whglv\" (UID: \"121dcfbe-214b-4bae-86d1-10d236d28c4a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.300102 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcqzt\" (UniqueName: \"kubernetes.io/projected/121dcfbe-214b-4bae-86d1-10d236d28c4a-kube-api-access-xcqzt\") pod \"dnsmasq-dns-5b7946d7b9-whglv\" (UID: \"121dcfbe-214b-4bae-86d1-10d236d28c4a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.395531 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.779790 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 15:02:39 crc kubenswrapper[4900]: I1202 15:02:39.835157 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-whglv"] Dec 02 15:02:39 crc kubenswrapper[4900]: W1202 15:02:39.843818 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod121dcfbe_214b_4bae_86d1_10d236d28c4a.slice/crio-8e4d07c7013b6c1a6f142bf233fb2cc884ea12fd5ee1c69dc41be77b55cd481d WatchSource:0}: Error finding container 8e4d07c7013b6c1a6f142bf233fb2cc884ea12fd5ee1c69dc41be77b55cd481d: Status 404 returned error can't find the container with id 8e4d07c7013b6c1a6f142bf233fb2cc884ea12fd5ee1c69dc41be77b55cd481d Dec 02 15:02:40 crc kubenswrapper[4900]: I1202 15:02:40.517171 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 15:02:40 crc kubenswrapper[4900]: I1202 15:02:40.732538 4900 generic.go:334] "Generic (PLEG): container finished" podID="121dcfbe-214b-4bae-86d1-10d236d28c4a" containerID="f4734f1ac03c4764aab974cd521023a7338aa189459bf6c20ec607a47d95ebbf" exitCode=0 Dec 02 15:02:40 crc kubenswrapper[4900]: I1202 15:02:40.732883 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" event={"ID":"121dcfbe-214b-4bae-86d1-10d236d28c4a","Type":"ContainerDied","Data":"f4734f1ac03c4764aab974cd521023a7338aa189459bf6c20ec607a47d95ebbf"} Dec 02 15:02:40 crc kubenswrapper[4900]: I1202 15:02:40.732932 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" event={"ID":"121dcfbe-214b-4bae-86d1-10d236d28c4a","Type":"ContainerStarted","Data":"8e4d07c7013b6c1a6f142bf233fb2cc884ea12fd5ee1c69dc41be77b55cd481d"} Dec 02 15:02:41 crc kubenswrapper[4900]: I1202 15:02:41.709631 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="63b18a68-c198-4804-80a5-740f18072e29" containerName="rabbitmq" containerID="cri-o://2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3" gracePeriod=604799 Dec 02 15:02:41 crc kubenswrapper[4900]: I1202 15:02:41.741860 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" 
event={"ID":"121dcfbe-214b-4bae-86d1-10d236d28c4a","Type":"ContainerStarted","Data":"dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3"} Dec 02 15:02:41 crc kubenswrapper[4900]: I1202 15:02:41.742254 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:41 crc kubenswrapper[4900]: I1202 15:02:41.761714 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" podStartSLOduration=2.761696354 podStartE2EDuration="2.761696354s" podCreationTimestamp="2025-12-02 15:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:02:41.755707234 +0000 UTC m=+4807.171521085" watchObservedRunningTime="2025-12-02 15:02:41.761696354 +0000 UTC m=+4807.177510205" Dec 02 15:02:42 crc kubenswrapper[4900]: I1202 15:02:42.208791 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="91ee4b7a-7a71-4379-95fa-b77cc7a414b7" containerName="rabbitmq" containerID="cri-o://18d1038137c44629ee6fbeccd7bebb43f6683d744dd14415716e09e5b32a80c6" gracePeriod=604799 Dec 02 15:02:43 crc kubenswrapper[4900]: I1202 15:02:43.468439 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="63b18a68-c198-4804-80a5-740f18072e29" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.238:5672: connect: connection refused" Dec 02 15:02:43 crc kubenswrapper[4900]: I1202 15:02:43.822833 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="91ee4b7a-7a71-4379-95fa-b77cc7a414b7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.239:5672: connect: connection refused" Dec 02 15:02:45 crc kubenswrapper[4900]: I1202 15:02:45.117317 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:02:45 crc kubenswrapper[4900]: I1202 15:02:45.117410 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.572180 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.628429 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\") pod \"63b18a68-c198-4804-80a5-740f18072e29\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.628466 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63b18a68-c198-4804-80a5-740f18072e29-pod-info\") pod \"63b18a68-c198-4804-80a5-740f18072e29\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.628511 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-erlang-cookie\") pod \"63b18a68-c198-4804-80a5-740f18072e29\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.628543 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-confd\") pod \"63b18a68-c198-4804-80a5-740f18072e29\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.628589 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bkxz\" (UniqueName: \"kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-kube-api-access-6bkxz\") pod \"63b18a68-c198-4804-80a5-740f18072e29\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.628629 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-plugins-conf\") pod \"63b18a68-c198-4804-80a5-740f18072e29\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.628691 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63b18a68-c198-4804-80a5-740f18072e29-erlang-cookie-secret\") pod \"63b18a68-c198-4804-80a5-740f18072e29\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.628711 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-server-conf\") pod \"63b18a68-c198-4804-80a5-740f18072e29\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.628734 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-plugins\") pod \"63b18a68-c198-4804-80a5-740f18072e29\" (UID: \"63b18a68-c198-4804-80a5-740f18072e29\") " Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.629278 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-erlang-cookie" (OuterVolumeSpecName: 
"rabbitmq-erlang-cookie") pod "63b18a68-c198-4804-80a5-740f18072e29" (UID: "63b18a68-c198-4804-80a5-740f18072e29"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.629346 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "63b18a68-c198-4804-80a5-740f18072e29" (UID: "63b18a68-c198-4804-80a5-740f18072e29"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.629942 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "63b18a68-c198-4804-80a5-740f18072e29" (UID: "63b18a68-c198-4804-80a5-740f18072e29"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.634014 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-kube-api-access-6bkxz" (OuterVolumeSpecName: "kube-api-access-6bkxz") pod "63b18a68-c198-4804-80a5-740f18072e29" (UID: "63b18a68-c198-4804-80a5-740f18072e29"). InnerVolumeSpecName "kube-api-access-6bkxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.635135 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63b18a68-c198-4804-80a5-740f18072e29-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "63b18a68-c198-4804-80a5-740f18072e29" (UID: "63b18a68-c198-4804-80a5-740f18072e29"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.642870 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/63b18a68-c198-4804-80a5-740f18072e29-pod-info" (OuterVolumeSpecName: "pod-info") pod "63b18a68-c198-4804-80a5-740f18072e29" (UID: "63b18a68-c198-4804-80a5-740f18072e29"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.652274 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a" (OuterVolumeSpecName: "persistence") pod "63b18a68-c198-4804-80a5-740f18072e29" (UID: "63b18a68-c198-4804-80a5-740f18072e29"). InnerVolumeSpecName "pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.655043 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-server-conf" (OuterVolumeSpecName: "server-conf") pod "63b18a68-c198-4804-80a5-740f18072e29" (UID: "63b18a68-c198-4804-80a5-740f18072e29"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.724301 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "63b18a68-c198-4804-80a5-740f18072e29" (UID: "63b18a68-c198-4804-80a5-740f18072e29"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.729426 4900 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63b18a68-c198-4804-80a5-740f18072e29-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.729448 4900 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-server-conf\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.729456 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.729485 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\") on node \"crc\" " Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.729496 4900 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63b18a68-c198-4804-80a5-740f18072e29-pod-info\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.729505 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.729514 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.729521 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bkxz\" (UniqueName: \"kubernetes.io/projected/63b18a68-c198-4804-80a5-740f18072e29-kube-api-access-6bkxz\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.729530 4900 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63b18a68-c198-4804-80a5-740f18072e29-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.744797 4900 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.745189 4900 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a") on node "crc" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.804521 4900 generic.go:334] "Generic (PLEG): container finished" podID="91ee4b7a-7a71-4379-95fa-b77cc7a414b7" containerID="18d1038137c44629ee6fbeccd7bebb43f6683d744dd14415716e09e5b32a80c6" exitCode=0 Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.804609 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"91ee4b7a-7a71-4379-95fa-b77cc7a414b7","Type":"ContainerDied","Data":"18d1038137c44629ee6fbeccd7bebb43f6683d744dd14415716e09e5b32a80c6"} Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.808034 4900 generic.go:334] "Generic (PLEG): container finished" podID="63b18a68-c198-4804-80a5-740f18072e29" containerID="2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3" exitCode=0 Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.808082 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63b18a68-c198-4804-80a5-740f18072e29","Type":"ContainerDied","Data":"2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3"} Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.808110 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63b18a68-c198-4804-80a5-740f18072e29","Type":"ContainerDied","Data":"ed219b4d95df6f813aa27f6c8d1fedf631c88eb8b2bfef3c6b9b66a2377619f0"} Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.808125 4900 scope.go:117] "RemoveContainer" containerID="2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.808628 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.830812 4900 reconciler_common.go:293] "Volume detached for volume \"pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.848741 4900 scope.go:117] "RemoveContainer" containerID="b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.851557 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.860708 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.879926 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 15:02:48 crc kubenswrapper[4900]: E1202 15:02:48.880264 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b18a68-c198-4804-80a5-740f18072e29" containerName="rabbitmq" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.880283 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b18a68-c198-4804-80a5-740f18072e29" containerName="rabbitmq" Dec 02 15:02:48 crc kubenswrapper[4900]: E1202 15:02:48.880306 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b18a68-c198-4804-80a5-740f18072e29" containerName="setup-container" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.880312 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b18a68-c198-4804-80a5-740f18072e29" containerName="setup-container" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.880445 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="63b18a68-c198-4804-80a5-740f18072e29" containerName="rabbitmq" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.884479 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.886544 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.886842 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.887059 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.887203 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-t7jn4" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.889105 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.889445 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.889858 4900 scope.go:117] "RemoveContainer" containerID="2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3" Dec 02 15:02:48 crc kubenswrapper[4900]: E1202 15:02:48.892369 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3\": container with ID starting with 2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3 not found: ID does not exist" containerID="2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.892405 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3"} err="failed to get container status \"2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3\": rpc error: code = NotFound desc = could not find container \"2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3\": container with ID starting with 2e8e71e14ab013e3221e30160a0ad43925d8582e3d1db40654700bc4ece9e6c3 not found: ID does not exist" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.892427 4900 scope.go:117] "RemoveContainer" containerID="b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7" Dec 02 15:02:48 crc kubenswrapper[4900]: E1202 15:02:48.892853 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7\": container with ID starting with b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7 not found: ID does not exist" containerID="b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.892888 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7"} err="failed to get container status \"b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7\": rpc error: code = NotFound desc = could not find container \"b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7\": container with ID starting with b9f5acea889988c27ed08632744b141092b6d2383b4e1cbcb2f5bddf317d10f7 not found: ID does not exist" Dec 02 15:02:48 crc 
kubenswrapper[4900]: I1202 15:02:48.926339 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:48 crc kubenswrapper[4900]: I1202 15:02:48.926429 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63b18a68-c198-4804-80a5-740f18072e29" path="/var/lib/kubelet/pods/63b18a68-c198-4804-80a5-740f18072e29/volumes" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.034687 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-erlang-cookie\") pod \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.034843 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\") pod \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.034880 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-erlang-cookie-secret\") pod \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.034910 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-plugins\") pod \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.034939 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-plugins-conf\") pod \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.034995 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mjdf\" (UniqueName: \"kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-kube-api-access-6mjdf\") pod \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035014 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-confd\") pod \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035030 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-server-conf\") pod \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035055 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-pod-info\") pod 
\"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\" (UID: \"91ee4b7a-7a71-4379-95fa-b77cc7a414b7\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035238 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d9af8210-7aca-4a64-96f3-17906daaef91-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035294 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d9af8210-7aca-4a64-96f3-17906daaef91-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035316 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d9af8210-7aca-4a64-96f3-17906daaef91-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035514 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035584 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d9af8210-7aca-4a64-96f3-17906daaef91-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035618 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq284\" (UniqueName: \"kubernetes.io/projected/d9af8210-7aca-4a64-96f3-17906daaef91-kube-api-access-rq284\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035535 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "91ee4b7a-7a71-4379-95fa-b77cc7a414b7" (UID: "91ee4b7a-7a71-4379-95fa-b77cc7a414b7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035543 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "91ee4b7a-7a71-4379-95fa-b77cc7a414b7" (UID: "91ee4b7a-7a71-4379-95fa-b77cc7a414b7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035619 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "91ee4b7a-7a71-4379-95fa-b77cc7a414b7" (UID: "91ee4b7a-7a71-4379-95fa-b77cc7a414b7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035742 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d9af8210-7aca-4a64-96f3-17906daaef91-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035762 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d9af8210-7aca-4a64-96f3-17906daaef91-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035785 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d9af8210-7aca-4a64-96f3-17906daaef91-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035945 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035980 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.035994 4900 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.038225 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-kube-api-access-6mjdf" (OuterVolumeSpecName: "kube-api-access-6mjdf") pod "91ee4b7a-7a71-4379-95fa-b77cc7a414b7" (UID: "91ee4b7a-7a71-4379-95fa-b77cc7a414b7"). InnerVolumeSpecName "kube-api-access-6mjdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.038610 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "91ee4b7a-7a71-4379-95fa-b77cc7a414b7" (UID: "91ee4b7a-7a71-4379-95fa-b77cc7a414b7"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.039017 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-pod-info" (OuterVolumeSpecName: "pod-info") pod "91ee4b7a-7a71-4379-95fa-b77cc7a414b7" (UID: "91ee4b7a-7a71-4379-95fa-b77cc7a414b7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.047355 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a" (OuterVolumeSpecName: "persistence") pod "91ee4b7a-7a71-4379-95fa-b77cc7a414b7" (UID: "91ee4b7a-7a71-4379-95fa-b77cc7a414b7"). InnerVolumeSpecName "pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.055428 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-server-conf" (OuterVolumeSpecName: "server-conf") pod "91ee4b7a-7a71-4379-95fa-b77cc7a414b7" (UID: "91ee4b7a-7a71-4379-95fa-b77cc7a414b7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.105844 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "91ee4b7a-7a71-4379-95fa-b77cc7a414b7" (UID: "91ee4b7a-7a71-4379-95fa-b77cc7a414b7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.137616 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d9af8210-7aca-4a64-96f3-17906daaef91-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.137681 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d9af8210-7aca-4a64-96f3-17906daaef91-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.137716 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d9af8210-7aca-4a64-96f3-17906daaef91-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.137754 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d9af8210-7aca-4a64-96f3-17906daaef91-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.137807 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/d9af8210-7aca-4a64-96f3-17906daaef91-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.137831 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d9af8210-7aca-4a64-96f3-17906daaef91-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.137887 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.137914 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq284\" (UniqueName: \"kubernetes.io/projected/d9af8210-7aca-4a64-96f3-17906daaef91-kube-api-access-rq284\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.137932 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d9af8210-7aca-4a64-96f3-17906daaef91-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.138003 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\") on node \"crc\" " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.138022 4900 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.138035 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mjdf\" (UniqueName: \"kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-kube-api-access-6mjdf\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.138059 4900 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.138073 4900 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-server-conf\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.138084 4900 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ee4b7a-7a71-4379-95fa-b77cc7a414b7-pod-info\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.138577 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/d9af8210-7aca-4a64-96f3-17906daaef91-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.138679 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d9af8210-7aca-4a64-96f3-17906daaef91-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.139186 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d9af8210-7aca-4a64-96f3-17906daaef91-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.139896 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d9af8210-7aca-4a64-96f3-17906daaef91-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.141282 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d9af8210-7aca-4a64-96f3-17906daaef91-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.141675 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d9af8210-7aca-4a64-96f3-17906daaef91-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.141683 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.141730 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b9a81c7f807e9f6d59c0a6bb96e4ab27a20133b5fab806d5f68bdbf484461212/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.143350 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d9af8210-7aca-4a64-96f3-17906daaef91-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.154598 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq284\" (UniqueName: \"kubernetes.io/projected/d9af8210-7aca-4a64-96f3-17906daaef91-kube-api-access-rq284\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.162827 4900 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.162964 4900 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a") on node "crc" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.180899 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74681afc-4c78-4ca9-9ee4-cc31094a704a\") pod \"rabbitmq-server-0\" (UID: \"d9af8210-7aca-4a64-96f3-17906daaef91\") " pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.217097 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.239068 4900 reconciler_common.go:293] "Volume detached for volume \"pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.396797 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.441993 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-d8588"] Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.442244 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-d8588" podUID="4704db25-8ddf-4902-9b24-36c7fceaa5db" containerName="dnsmasq-dns" containerID="cri-o://cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb" gracePeriod=10 Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.612248 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.791093 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-d8588" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.817099 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d9af8210-7aca-4a64-96f3-17906daaef91","Type":"ContainerStarted","Data":"7e2fd00b27b23fd127275f073ecbdc4df5c5bcfd7ff1d6bb280d3cf403f0056a"} Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.819401 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.819380 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"91ee4b7a-7a71-4379-95fa-b77cc7a414b7","Type":"ContainerDied","Data":"73b0438b6dcf5aedcb70fdf245fb21f87d1254101a61712c0861f11cd023030e"} Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.819461 4900 scope.go:117] "RemoveContainer" containerID="18d1038137c44629ee6fbeccd7bebb43f6683d744dd14415716e09e5b32a80c6" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.822086 4900 generic.go:334] "Generic (PLEG): container finished" podID="4704db25-8ddf-4902-9b24-36c7fceaa5db" containerID="cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb" exitCode=0 Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.822120 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-d8588" event={"ID":"4704db25-8ddf-4902-9b24-36c7fceaa5db","Type":"ContainerDied","Data":"cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb"} Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.822144 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-d8588" event={"ID":"4704db25-8ddf-4902-9b24-36c7fceaa5db","Type":"ContainerDied","Data":"21e2a9431982ad589f3318eae78b37f55d305a63404f6a2595c4aa5955bdee33"} Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.822193 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-d8588" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.840730 4900 scope.go:117] "RemoveContainer" containerID="34e8536c44720b188d52cd09dad67e0084387f482c2c8104fded625376cd3104" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.848568 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-dns-svc\") pod \"4704db25-8ddf-4902-9b24-36c7fceaa5db\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.848628 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-config\") pod \"4704db25-8ddf-4902-9b24-36c7fceaa5db\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.848714 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4s4c\" (UniqueName: \"kubernetes.io/projected/4704db25-8ddf-4902-9b24-36c7fceaa5db-kube-api-access-s4s4c\") pod \"4704db25-8ddf-4902-9b24-36c7fceaa5db\" (UID: \"4704db25-8ddf-4902-9b24-36c7fceaa5db\") " Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.856502 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4704db25-8ddf-4902-9b24-36c7fceaa5db-kube-api-access-s4s4c" (OuterVolumeSpecName: "kube-api-access-s4s4c") pod "4704db25-8ddf-4902-9b24-36c7fceaa5db" (UID: "4704db25-8ddf-4902-9b24-36c7fceaa5db"). InnerVolumeSpecName "kube-api-access-s4s4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.858801 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.863830 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.873327 4900 scope.go:117] "RemoveContainer" containerID="cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.884372 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 15:02:49 crc kubenswrapper[4900]: E1202 15:02:49.889455 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4704db25-8ddf-4902-9b24-36c7fceaa5db" containerName="dnsmasq-dns" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.889486 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="4704db25-8ddf-4902-9b24-36c7fceaa5db" containerName="dnsmasq-dns" Dec 02 15:02:49 crc kubenswrapper[4900]: E1202 15:02:49.889516 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4704db25-8ddf-4902-9b24-36c7fceaa5db" containerName="init" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.889548 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="4704db25-8ddf-4902-9b24-36c7fceaa5db" containerName="init" Dec 02 15:02:49 crc kubenswrapper[4900]: E1202 15:02:49.889560 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ee4b7a-7a71-4379-95fa-b77cc7a414b7" containerName="rabbitmq" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.889568 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ee4b7a-7a71-4379-95fa-b77cc7a414b7" 
containerName="rabbitmq" Dec 02 15:02:49 crc kubenswrapper[4900]: E1202 15:02:49.889599 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ee4b7a-7a71-4379-95fa-b77cc7a414b7" containerName="setup-container" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.889608 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ee4b7a-7a71-4379-95fa-b77cc7a414b7" containerName="setup-container" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.889842 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="4704db25-8ddf-4902-9b24-36c7fceaa5db" containerName="dnsmasq-dns" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.889888 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ee4b7a-7a71-4379-95fa-b77cc7a414b7" containerName="rabbitmq" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.891905 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.894102 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.894400 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.895180 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.895321 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.895491 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fw77l" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.903859 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.918350 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-config" (OuterVolumeSpecName: "config") pod "4704db25-8ddf-4902-9b24-36c7fceaa5db" (UID: "4704db25-8ddf-4902-9b24-36c7fceaa5db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.920151 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4704db25-8ddf-4902-9b24-36c7fceaa5db" (UID: "4704db25-8ddf-4902-9b24-36c7fceaa5db"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.920888 4900 scope.go:117] "RemoveContainer" containerID="21db73ead88989902886df9c4db776ebd7d23bc0d668dc1b69c85e6d834a5f29" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.939102 4900 scope.go:117] "RemoveContainer" containerID="cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb" Dec 02 15:02:49 crc kubenswrapper[4900]: E1202 15:02:49.939502 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb\": container with ID starting with cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb not found: ID does not exist" containerID="cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.939541 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb"} err="failed to get container status \"cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb\": rpc error: code = NotFound desc = could not find container \"cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb\": container with ID starting with cc8607e9c024143cf644b933c4fe6cb82729358ccc73a876101019138e6940cb not found: ID does not exist" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.939565 4900 scope.go:117] "RemoveContainer" containerID="21db73ead88989902886df9c4db776ebd7d23bc0d668dc1b69c85e6d834a5f29" Dec 02 15:02:49 crc kubenswrapper[4900]: E1202 15:02:49.939800 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21db73ead88989902886df9c4db776ebd7d23bc0d668dc1b69c85e6d834a5f29\": container with ID starting with 21db73ead88989902886df9c4db776ebd7d23bc0d668dc1b69c85e6d834a5f29 not found: ID does not exist" containerID="21db73ead88989902886df9c4db776ebd7d23bc0d668dc1b69c85e6d834a5f29" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.939824 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21db73ead88989902886df9c4db776ebd7d23bc0d668dc1b69c85e6d834a5f29"} err="failed to get container status \"21db73ead88989902886df9c4db776ebd7d23bc0d668dc1b69c85e6d834a5f29\": rpc error: code = NotFound desc = could not find container \"21db73ead88989902886df9c4db776ebd7d23bc0d668dc1b69c85e6d834a5f29\": container with ID starting with 21db73ead88989902886df9c4db776ebd7d23bc0d668dc1b69c85e6d834a5f29 not found: ID does not exist" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.949801 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4s4c\" (UniqueName: \"kubernetes.io/projected/4704db25-8ddf-4902-9b24-36c7fceaa5db-kube-api-access-s4s4c\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.949828 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:49 crc kubenswrapper[4900]: I1202 15:02:49.949838 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4704db25-8ddf-4902-9b24-36c7fceaa5db-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 
15:02:50.050876 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.050930 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d8040323-8bc7-4ee9-bf46-c7f1499a653f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.050986 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d8040323-8bc7-4ee9-bf46-c7f1499a653f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.051009 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcbfh\" (UniqueName: \"kubernetes.io/projected/d8040323-8bc7-4ee9-bf46-c7f1499a653f-kube-api-access-zcbfh\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.051054 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d8040323-8bc7-4ee9-bf46-c7f1499a653f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.051094 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d8040323-8bc7-4ee9-bf46-c7f1499a653f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.051145 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d8040323-8bc7-4ee9-bf46-c7f1499a653f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.051169 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d8040323-8bc7-4ee9-bf46-c7f1499a653f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.051195 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d8040323-8bc7-4ee9-bf46-c7f1499a653f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.152389 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d8040323-8bc7-4ee9-bf46-c7f1499a653f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.152486 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d8040323-8bc7-4ee9-bf46-c7f1499a653f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.152539 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.152588 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d8040323-8bc7-4ee9-bf46-c7f1499a653f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.152707 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d8040323-8bc7-4ee9-bf46-c7f1499a653f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.152752 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcbfh\" (UniqueName: \"kubernetes.io/projected/d8040323-8bc7-4ee9-bf46-c7f1499a653f-kube-api-access-zcbfh\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.152837 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d8040323-8bc7-4ee9-bf46-c7f1499a653f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.152900 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d8040323-8bc7-4ee9-bf46-c7f1499a653f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.152980 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d8040323-8bc7-4ee9-bf46-c7f1499a653f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.155579 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d8040323-8bc7-4ee9-bf46-c7f1499a653f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.155837 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d8040323-8bc7-4ee9-bf46-c7f1499a653f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.156510 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d8040323-8bc7-4ee9-bf46-c7f1499a653f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.157095 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d8040323-8bc7-4ee9-bf46-c7f1499a653f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.162987 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d8040323-8bc7-4ee9-bf46-c7f1499a653f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.163543 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d8040323-8bc7-4ee9-bf46-c7f1499a653f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.163960 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.164015 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/abd2f31a42c10988e8066328fb23961a2f90c3f144a3dc004e0c04ac14fce2ce/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.165559 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d8040323-8bc7-4ee9-bf46-c7f1499a653f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.177474 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcbfh\" (UniqueName: \"kubernetes.io/projected/d8040323-8bc7-4ee9-bf46-c7f1499a653f-kube-api-access-zcbfh\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.178019 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-d8588"] Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.192028 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-d8588"] Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.214758 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7ae5d40a-c7d8-4f79-97ed-90ce7d0e0f2a\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8040323-8bc7-4ee9-bf46-c7f1499a653f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.221628 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:02:50 crc kubenswrapper[4900]: W1202 15:02:50.446330 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8040323_8bc7_4ee9_bf46_c7f1499a653f.slice/crio-0fd166398dc1346fff809c696f119aab6fa5fff120b7d70e918b154624d810f9 WatchSource:0}: Error finding container 0fd166398dc1346fff809c696f119aab6fa5fff120b7d70e918b154624d810f9: Status 404 returned error can't find the container with id 0fd166398dc1346fff809c696f119aab6fa5fff120b7d70e918b154624d810f9 Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.447439 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.834238 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d8040323-8bc7-4ee9-bf46-c7f1499a653f","Type":"ContainerStarted","Data":"0fd166398dc1346fff809c696f119aab6fa5fff120b7d70e918b154624d810f9"} Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.839034 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d9af8210-7aca-4a64-96f3-17906daaef91","Type":"ContainerStarted","Data":"aec00d232ebdaecfda4f15019826fffa153ead022fa0162444d1a5b7c8744344"} Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.919389 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4704db25-8ddf-4902-9b24-36c7fceaa5db" path="/var/lib/kubelet/pods/4704db25-8ddf-4902-9b24-36c7fceaa5db/volumes" Dec 02 15:02:50 crc kubenswrapper[4900]: I1202 15:02:50.920389 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ee4b7a-7a71-4379-95fa-b77cc7a414b7" path="/var/lib/kubelet/pods/91ee4b7a-7a71-4379-95fa-b77cc7a414b7/volumes" Dec 02 15:02:51 crc kubenswrapper[4900]: I1202 15:02:51.850455 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d8040323-8bc7-4ee9-bf46-c7f1499a653f","Type":"ContainerStarted","Data":"abc7a31e9ea961398feb76a168a8da4957a4ace9f7661d60c246b86debf5b900"} Dec 02 15:03:15 crc kubenswrapper[4900]: I1202 15:03:15.116306 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:03:15 crc kubenswrapper[4900]: I1202 15:03:15.116925 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:03:15 crc kubenswrapper[4900]: I1202 15:03:15.116988 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 15:03:15 crc kubenswrapper[4900]: I1202 15:03:15.117950 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b795cb18b3b3ac1e48eb43f789980baa8ed78f685f14ca23961f153619a7bd73"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness 
probe, will be restarted" Dec 02 15:03:15 crc kubenswrapper[4900]: I1202 15:03:15.118086 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://b795cb18b3b3ac1e48eb43f789980baa8ed78f685f14ca23961f153619a7bd73" gracePeriod=600 Dec 02 15:03:16 crc kubenswrapper[4900]: I1202 15:03:16.054270 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="b795cb18b3b3ac1e48eb43f789980baa8ed78f685f14ca23961f153619a7bd73" exitCode=0 Dec 02 15:03:16 crc kubenswrapper[4900]: I1202 15:03:16.054317 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"b795cb18b3b3ac1e48eb43f789980baa8ed78f685f14ca23961f153619a7bd73"} Dec 02 15:03:16 crc kubenswrapper[4900]: I1202 15:03:16.054699 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7"} Dec 02 15:03:16 crc kubenswrapper[4900]: I1202 15:03:16.054727 4900 scope.go:117] "RemoveContainer" containerID="93c4a43dbfe006142eaf05e7f29be27fd0e5803d6592faafbbe8c42a0ea4da1d" Dec 02 15:03:24 crc kubenswrapper[4900]: I1202 15:03:24.115616 4900 generic.go:334] "Generic (PLEG): container finished" podID="d9af8210-7aca-4a64-96f3-17906daaef91" containerID="aec00d232ebdaecfda4f15019826fffa153ead022fa0162444d1a5b7c8744344" exitCode=0 Dec 02 15:03:24 crc kubenswrapper[4900]: I1202 15:03:24.115682 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d9af8210-7aca-4a64-96f3-17906daaef91","Type":"ContainerDied","Data":"aec00d232ebdaecfda4f15019826fffa153ead022fa0162444d1a5b7c8744344"} Dec 02 15:03:25 crc kubenswrapper[4900]: I1202 15:03:25.126263 4900 generic.go:334] "Generic (PLEG): container finished" podID="d8040323-8bc7-4ee9-bf46-c7f1499a653f" containerID="abc7a31e9ea961398feb76a168a8da4957a4ace9f7661d60c246b86debf5b900" exitCode=0 Dec 02 15:03:25 crc kubenswrapper[4900]: I1202 15:03:25.126348 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d8040323-8bc7-4ee9-bf46-c7f1499a653f","Type":"ContainerDied","Data":"abc7a31e9ea961398feb76a168a8da4957a4ace9f7661d60c246b86debf5b900"} Dec 02 15:03:25 crc kubenswrapper[4900]: I1202 15:03:25.129426 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d9af8210-7aca-4a64-96f3-17906daaef91","Type":"ContainerStarted","Data":"8a2e1e0bc8baf9f06694672cffeba32c5ce905a64f4816b71217ec0e2b88c75f"} Dec 02 15:03:25 crc kubenswrapper[4900]: I1202 15:03:25.130765 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 02 15:03:25 crc kubenswrapper[4900]: I1202 15:03:25.187585 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.187567147 podStartE2EDuration="37.187567147s" podCreationTimestamp="2025-12-02 15:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 
15:03:25.185453247 +0000 UTC m=+4850.601267108" watchObservedRunningTime="2025-12-02 15:03:25.187567147 +0000 UTC m=+4850.603380998" Dec 02 15:03:26 crc kubenswrapper[4900]: I1202 15:03:26.140386 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d8040323-8bc7-4ee9-bf46-c7f1499a653f","Type":"ContainerStarted","Data":"008e627ea5cd919182e85fdab0e0151f97f88e98c1e19a5cd4fdca3329af8cb5"} Dec 02 15:03:26 crc kubenswrapper[4900]: I1202 15:03:26.141270 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:03:26 crc kubenswrapper[4900]: I1202 15:03:26.165125 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.16508394 podStartE2EDuration="37.16508394s" podCreationTimestamp="2025-12-02 15:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:03:26.161000744 +0000 UTC m=+4851.576814605" watchObservedRunningTime="2025-12-02 15:03:26.16508394 +0000 UTC m=+4851.580897791" Dec 02 15:03:39 crc kubenswrapper[4900]: I1202 15:03:39.222752 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 02 15:03:40 crc kubenswrapper[4900]: I1202 15:03:40.225849 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 02 15:03:51 crc kubenswrapper[4900]: I1202 15:03:51.533481 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Dec 02 15:03:51 crc kubenswrapper[4900]: I1202 15:03:51.535503 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 02 15:03:51 crc kubenswrapper[4900]: I1202 15:03:51.546752 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rpf9s" Dec 02 15:03:51 crc kubenswrapper[4900]: I1202 15:03:51.547685 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 02 15:03:51 crc kubenswrapper[4900]: I1202 15:03:51.600040 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnd2h\" (UniqueName: \"kubernetes.io/projected/17ae16c8-8225-4e3c-88e0-9a6ef6de9216-kube-api-access-vnd2h\") pod \"mariadb-client-1-default\" (UID: \"17ae16c8-8225-4e3c-88e0-9a6ef6de9216\") " pod="openstack/mariadb-client-1-default" Dec 02 15:03:51 crc kubenswrapper[4900]: I1202 15:03:51.701849 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnd2h\" (UniqueName: \"kubernetes.io/projected/17ae16c8-8225-4e3c-88e0-9a6ef6de9216-kube-api-access-vnd2h\") pod \"mariadb-client-1-default\" (UID: \"17ae16c8-8225-4e3c-88e0-9a6ef6de9216\") " pod="openstack/mariadb-client-1-default" Dec 02 15:03:51 crc kubenswrapper[4900]: I1202 15:03:51.736940 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnd2h\" (UniqueName: \"kubernetes.io/projected/17ae16c8-8225-4e3c-88e0-9a6ef6de9216-kube-api-access-vnd2h\") pod \"mariadb-client-1-default\" (UID: \"17ae16c8-8225-4e3c-88e0-9a6ef6de9216\") " pod="openstack/mariadb-client-1-default" Dec 02 15:03:51 crc kubenswrapper[4900]: I1202 15:03:51.866345 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 02 15:03:52 crc kubenswrapper[4900]: I1202 15:03:52.417288 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 02 15:03:53 crc kubenswrapper[4900]: I1202 15:03:53.397870 4900 generic.go:334] "Generic (PLEG): container finished" podID="17ae16c8-8225-4e3c-88e0-9a6ef6de9216" containerID="a403c38d9b3918b1f1ae6dd619c5bfd9590ebb05f7a2b8ecef73c59ca792a54e" exitCode=0 Dec 02 15:03:53 crc kubenswrapper[4900]: I1202 15:03:53.398243 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"17ae16c8-8225-4e3c-88e0-9a6ef6de9216","Type":"ContainerDied","Data":"a403c38d9b3918b1f1ae6dd619c5bfd9590ebb05f7a2b8ecef73c59ca792a54e"} Dec 02 15:03:53 crc kubenswrapper[4900]: I1202 15:03:53.398282 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"17ae16c8-8225-4e3c-88e0-9a6ef6de9216","Type":"ContainerStarted","Data":"aca48afaea536a875ea45405985728f78f85051be933303e59dda07d0365cfec"} Dec 02 15:03:54 crc kubenswrapper[4900]: I1202 15:03:54.755515 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 02 15:03:54 crc kubenswrapper[4900]: I1202 15:03:54.782356 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_17ae16c8-8225-4e3c-88e0-9a6ef6de9216/mariadb-client-1-default/0.log" Dec 02 15:03:54 crc kubenswrapper[4900]: I1202 15:03:54.810553 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 02 15:03:54 crc kubenswrapper[4900]: I1202 15:03:54.818539 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 02 15:03:54 crc kubenswrapper[4900]: I1202 15:03:54.853102 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnd2h\" (UniqueName: \"kubernetes.io/projected/17ae16c8-8225-4e3c-88e0-9a6ef6de9216-kube-api-access-vnd2h\") pod \"17ae16c8-8225-4e3c-88e0-9a6ef6de9216\" (UID: \"17ae16c8-8225-4e3c-88e0-9a6ef6de9216\") " Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.154524 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ae16c8-8225-4e3c-88e0-9a6ef6de9216-kube-api-access-vnd2h" (OuterVolumeSpecName: "kube-api-access-vnd2h") pod "17ae16c8-8225-4e3c-88e0-9a6ef6de9216" (UID: "17ae16c8-8225-4e3c-88e0-9a6ef6de9216"). InnerVolumeSpecName "kube-api-access-vnd2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.157251 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnd2h\" (UniqueName: \"kubernetes.io/projected/17ae16c8-8225-4e3c-88e0-9a6ef6de9216-kube-api-access-vnd2h\") on node \"crc\" DevicePath \"\"" Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.322076 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Dec 02 15:03:55 crc kubenswrapper[4900]: E1202 15:03:55.323144 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ae16c8-8225-4e3c-88e0-9a6ef6de9216" containerName="mariadb-client-1-default" Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.323341 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ae16c8-8225-4e3c-88e0-9a6ef6de9216" containerName="mariadb-client-1-default" Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.323895 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ae16c8-8225-4e3c-88e0-9a6ef6de9216" containerName="mariadb-client-1-default" Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.324961 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.329164 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.361216 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97kj\" (UniqueName: \"kubernetes.io/projected/08f7dd85-4980-47b2-b063-197c97adde46-kube-api-access-z97kj\") pod \"mariadb-client-2-default\" (UID: \"08f7dd85-4980-47b2-b063-197c97adde46\") " pod="openstack/mariadb-client-2-default" Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.414778 4900 scope.go:117] "RemoveContainer" containerID="a403c38d9b3918b1f1ae6dd619c5bfd9590ebb05f7a2b8ecef73c59ca792a54e" Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.414903 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.463110 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97kj\" (UniqueName: \"kubernetes.io/projected/08f7dd85-4980-47b2-b063-197c97adde46-kube-api-access-z97kj\") pod \"mariadb-client-2-default\" (UID: \"08f7dd85-4980-47b2-b063-197c97adde46\") " pod="openstack/mariadb-client-2-default" Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.483945 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z97kj\" (UniqueName: \"kubernetes.io/projected/08f7dd85-4980-47b2-b063-197c97adde46-kube-api-access-z97kj\") pod \"mariadb-client-2-default\" (UID: \"08f7dd85-4980-47b2-b063-197c97adde46\") " pod="openstack/mariadb-client-2-default" Dec 02 15:03:55 crc kubenswrapper[4900]: I1202 15:03:55.654957 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 02 15:03:56 crc kubenswrapper[4900]: I1202 15:03:56.237489 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 02 15:03:56 crc kubenswrapper[4900]: I1202 15:03:56.423916 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"08f7dd85-4980-47b2-b063-197c97adde46","Type":"ContainerStarted","Data":"c4a8887994f0ceef3e4b1f2367eba00b9ee5bd11243fcb24d42c5f9ff062e47b"} Dec 02 15:03:56 crc kubenswrapper[4900]: I1202 15:03:56.423961 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"08f7dd85-4980-47b2-b063-197c97adde46","Type":"ContainerStarted","Data":"96ef303bb175bf8c7190c6d6c17b30e0f7a745a67bec09eb39139da335e6c176"} Dec 02 15:03:56 crc kubenswrapper[4900]: I1202 15:03:56.436728 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=1.4367010439999999 podStartE2EDuration="1.436701044s" podCreationTimestamp="2025-12-02 15:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:03:56.433490903 +0000 UTC m=+4881.849304754" watchObservedRunningTime="2025-12-02 15:03:56.436701044 +0000 UTC m=+4881.852514905" Dec 02 15:03:56 crc kubenswrapper[4900]: I1202 15:03:56.928186 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ae16c8-8225-4e3c-88e0-9a6ef6de9216" path="/var/lib/kubelet/pods/17ae16c8-8225-4e3c-88e0-9a6ef6de9216/volumes" Dec 02 15:03:57 crc kubenswrapper[4900]: I1202 15:03:57.438349 4900 generic.go:334] "Generic (PLEG): container finished" podID="08f7dd85-4980-47b2-b063-197c97adde46" containerID="c4a8887994f0ceef3e4b1f2367eba00b9ee5bd11243fcb24d42c5f9ff062e47b" exitCode=1 Dec 02 15:03:57 crc kubenswrapper[4900]: I1202 15:03:57.438393 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"08f7dd85-4980-47b2-b063-197c97adde46","Type":"ContainerDied","Data":"c4a8887994f0ceef3e4b1f2367eba00b9ee5bd11243fcb24d42c5f9ff062e47b"} Dec 02 15:03:58 crc kubenswrapper[4900]: I1202 15:03:58.762848 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 02 15:03:58 crc kubenswrapper[4900]: I1202 15:03:58.799175 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 02 15:03:58 crc kubenswrapper[4900]: I1202 15:03:58.805364 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 02 15:03:58 crc kubenswrapper[4900]: I1202 15:03:58.809782 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z97kj\" (UniqueName: \"kubernetes.io/projected/08f7dd85-4980-47b2-b063-197c97adde46-kube-api-access-z97kj\") pod \"08f7dd85-4980-47b2-b063-197c97adde46\" (UID: \"08f7dd85-4980-47b2-b063-197c97adde46\") " Dec 02 15:03:58 crc kubenswrapper[4900]: I1202 15:03:58.815301 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f7dd85-4980-47b2-b063-197c97adde46-kube-api-access-z97kj" (OuterVolumeSpecName: "kube-api-access-z97kj") pod "08f7dd85-4980-47b2-b063-197c97adde46" (UID: "08f7dd85-4980-47b2-b063-197c97adde46"). InnerVolumeSpecName "kube-api-access-z97kj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:03:58 crc kubenswrapper[4900]: I1202 15:03:58.911710 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z97kj\" (UniqueName: \"kubernetes.io/projected/08f7dd85-4980-47b2-b063-197c97adde46-kube-api-access-z97kj\") on node \"crc\" DevicePath \"\"" Dec 02 15:03:58 crc kubenswrapper[4900]: I1202 15:03:58.922322 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f7dd85-4980-47b2-b063-197c97adde46" path="/var/lib/kubelet/pods/08f7dd85-4980-47b2-b063-197c97adde46/volumes" Dec 02 15:03:59 crc kubenswrapper[4900]: I1202 15:03:59.286357 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Dec 02 15:03:59 crc kubenswrapper[4900]: E1202 15:03:59.287244 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f7dd85-4980-47b2-b063-197c97adde46" containerName="mariadb-client-2-default" Dec 02 15:03:59 crc kubenswrapper[4900]: I1202 15:03:59.287275 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f7dd85-4980-47b2-b063-197c97adde46" containerName="mariadb-client-2-default" Dec 02 15:03:59 crc kubenswrapper[4900]: I1202 15:03:59.287574 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f7dd85-4980-47b2-b063-197c97adde46" containerName="mariadb-client-2-default" Dec 02 15:03:59 crc kubenswrapper[4900]: I1202 15:03:59.288540 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Dec 02 15:03:59 crc kubenswrapper[4900]: I1202 15:03:59.306380 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Dec 02 15:03:59 crc kubenswrapper[4900]: I1202 15:03:59.319551 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltn7j\" (UniqueName: \"kubernetes.io/projected/27fdf174-009b-4707-a476-8d88c04c7228-kube-api-access-ltn7j\") pod \"mariadb-client-1\" (UID: \"27fdf174-009b-4707-a476-8d88c04c7228\") " pod="openstack/mariadb-client-1" Dec 02 15:03:59 crc kubenswrapper[4900]: I1202 15:03:59.420810 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltn7j\" (UniqueName: \"kubernetes.io/projected/27fdf174-009b-4707-a476-8d88c04c7228-kube-api-access-ltn7j\") pod \"mariadb-client-1\" (UID: \"27fdf174-009b-4707-a476-8d88c04c7228\") " pod="openstack/mariadb-client-1" Dec 02 15:03:59 crc kubenswrapper[4900]: I1202 15:03:59.442842 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltn7j\" (UniqueName: \"kubernetes.io/projected/27fdf174-009b-4707-a476-8d88c04c7228-kube-api-access-ltn7j\") pod \"mariadb-client-1\" (UID: \"27fdf174-009b-4707-a476-8d88c04c7228\") " pod="openstack/mariadb-client-1" Dec 02 15:03:59 crc kubenswrapper[4900]: I1202 15:03:59.462960 4900 scope.go:117] "RemoveContainer" containerID="c4a8887994f0ceef3e4b1f2367eba00b9ee5bd11243fcb24d42c5f9ff062e47b" Dec 02 15:03:59 crc kubenswrapper[4900]: I1202 15:03:59.463386 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 02 15:03:59 crc kubenswrapper[4900]: I1202 15:03:59.627834 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Dec 02 15:04:00 crc kubenswrapper[4900]: I1202 15:04:00.189972 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Dec 02 15:04:00 crc kubenswrapper[4900]: W1202 15:04:00.193586 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27fdf174_009b_4707_a476_8d88c04c7228.slice/crio-353fd20fae6d6f2158d97ee83986804cdff4adc63d102656d6be77c8b17b5434 WatchSource:0}: Error finding container 353fd20fae6d6f2158d97ee83986804cdff4adc63d102656d6be77c8b17b5434: Status 404 returned error can't find the container with id 353fd20fae6d6f2158d97ee83986804cdff4adc63d102656d6be77c8b17b5434 Dec 02 15:04:00 crc kubenswrapper[4900]: I1202 15:04:00.472454 4900 generic.go:334] "Generic (PLEG): container finished" podID="27fdf174-009b-4707-a476-8d88c04c7228" containerID="3a08432bdff7117e29874a0e922811c6b8d6431232df919945632f0348872cb7" exitCode=0 Dec 02 15:04:00 crc kubenswrapper[4900]: I1202 15:04:00.472548 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"27fdf174-009b-4707-a476-8d88c04c7228","Type":"ContainerDied","Data":"3a08432bdff7117e29874a0e922811c6b8d6431232df919945632f0348872cb7"} Dec 02 15:04:00 crc kubenswrapper[4900]: I1202 15:04:00.472705 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"27fdf174-009b-4707-a476-8d88c04c7228","Type":"ContainerStarted","Data":"353fd20fae6d6f2158d97ee83986804cdff4adc63d102656d6be77c8b17b5434"} Dec 02 15:04:01 crc kubenswrapper[4900]: I1202 15:04:01.915415 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Dec 02 15:04:01 crc kubenswrapper[4900]: I1202 15:04:01.948891 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_27fdf174-009b-4707-a476-8d88c04c7228/mariadb-client-1/0.log" Dec 02 15:04:01 crc kubenswrapper[4900]: I1202 15:04:01.988253 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Dec 02 15:04:01 crc kubenswrapper[4900]: I1202 15:04:01.999853 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.063529 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltn7j\" (UniqueName: \"kubernetes.io/projected/27fdf174-009b-4707-a476-8d88c04c7228-kube-api-access-ltn7j\") pod \"27fdf174-009b-4707-a476-8d88c04c7228\" (UID: \"27fdf174-009b-4707-a476-8d88c04c7228\") " Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.076919 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27fdf174-009b-4707-a476-8d88c04c7228-kube-api-access-ltn7j" (OuterVolumeSpecName: "kube-api-access-ltn7j") pod "27fdf174-009b-4707-a476-8d88c04c7228" (UID: "27fdf174-009b-4707-a476-8d88c04c7228"). InnerVolumeSpecName "kube-api-access-ltn7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.165213 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltn7j\" (UniqueName: \"kubernetes.io/projected/27fdf174-009b-4707-a476-8d88c04c7228-kube-api-access-ltn7j\") on node \"crc\" DevicePath \"\"" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.378176 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Dec 02 15:04:02 crc kubenswrapper[4900]: E1202 15:04:02.378700 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fdf174-009b-4707-a476-8d88c04c7228" containerName="mariadb-client-1" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.378732 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fdf174-009b-4707-a476-8d88c04c7228" containerName="mariadb-client-1" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.379083 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fdf174-009b-4707-a476-8d88c04c7228" containerName="mariadb-client-1" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.379992 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.388603 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.471043 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxbdp\" (UniqueName: \"kubernetes.io/projected/bea53af7-e0a1-4d21-8525-4288cd88ecd7-kube-api-access-nxbdp\") pod \"mariadb-client-4-default\" (UID: \"bea53af7-e0a1-4d21-8525-4288cd88ecd7\") " pod="openstack/mariadb-client-4-default" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.493298 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="353fd20fae6d6f2158d97ee83986804cdff4adc63d102656d6be77c8b17b5434" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.493337 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.572169 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxbdp\" (UniqueName: \"kubernetes.io/projected/bea53af7-e0a1-4d21-8525-4288cd88ecd7-kube-api-access-nxbdp\") pod \"mariadb-client-4-default\" (UID: \"bea53af7-e0a1-4d21-8525-4288cd88ecd7\") " pod="openstack/mariadb-client-4-default" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.591007 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxbdp\" (UniqueName: \"kubernetes.io/projected/bea53af7-e0a1-4d21-8525-4288cd88ecd7-kube-api-access-nxbdp\") pod \"mariadb-client-4-default\" (UID: \"bea53af7-e0a1-4d21-8525-4288cd88ecd7\") " pod="openstack/mariadb-client-4-default" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.714386 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 02 15:04:02 crc kubenswrapper[4900]: I1202 15:04:02.918997 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27fdf174-009b-4707-a476-8d88c04c7228" path="/var/lib/kubelet/pods/27fdf174-009b-4707-a476-8d88c04c7228/volumes" Dec 02 15:04:03 crc kubenswrapper[4900]: I1202 15:04:03.264796 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 02 15:04:03 crc kubenswrapper[4900]: W1202 15:04:03.268487 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbea53af7_e0a1_4d21_8525_4288cd88ecd7.slice/crio-496ba2848884dcc4cb186d2a5ec54f2efd260e872d7bebf5404c6d1ef8e78332 WatchSource:0}: Error finding container 496ba2848884dcc4cb186d2a5ec54f2efd260e872d7bebf5404c6d1ef8e78332: Status 404 returned error can't find the container with id 496ba2848884dcc4cb186d2a5ec54f2efd260e872d7bebf5404c6d1ef8e78332 Dec 02 15:04:03 crc kubenswrapper[4900]: I1202 15:04:03.500907 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"bea53af7-e0a1-4d21-8525-4288cd88ecd7","Type":"ContainerStarted","Data":"496ba2848884dcc4cb186d2a5ec54f2efd260e872d7bebf5404c6d1ef8e78332"} Dec 02 15:04:04 crc kubenswrapper[4900]: I1202 15:04:04.510700 4900 generic.go:334] "Generic (PLEG): container finished" podID="bea53af7-e0a1-4d21-8525-4288cd88ecd7" containerID="537e8e418df125091c3c42753b8781f4e7a707c9d65f8abeb1a8fecdefe40593" exitCode=0 Dec 02 15:04:04 crc kubenswrapper[4900]: I1202 15:04:04.510761 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"bea53af7-e0a1-4d21-8525-4288cd88ecd7","Type":"ContainerDied","Data":"537e8e418df125091c3c42753b8781f4e7a707c9d65f8abeb1a8fecdefe40593"} Dec 02 15:04:05 crc kubenswrapper[4900]: I1202 15:04:05.858552 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 02 15:04:05 crc kubenswrapper[4900]: I1202 15:04:05.876697 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_bea53af7-e0a1-4d21-8525-4288cd88ecd7/mariadb-client-4-default/0.log" Dec 02 15:04:05 crc kubenswrapper[4900]: I1202 15:04:05.895961 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 02 15:04:05 crc kubenswrapper[4900]: I1202 15:04:05.900977 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 02 15:04:06 crc kubenswrapper[4900]: I1202 15:04:06.018615 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxbdp\" (UniqueName: \"kubernetes.io/projected/bea53af7-e0a1-4d21-8525-4288cd88ecd7-kube-api-access-nxbdp\") pod \"bea53af7-e0a1-4d21-8525-4288cd88ecd7\" (UID: \"bea53af7-e0a1-4d21-8525-4288cd88ecd7\") " Dec 02 15:04:06 crc kubenswrapper[4900]: I1202 15:04:06.255132 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea53af7-e0a1-4d21-8525-4288cd88ecd7-kube-api-access-nxbdp" (OuterVolumeSpecName: "kube-api-access-nxbdp") pod "bea53af7-e0a1-4d21-8525-4288cd88ecd7" (UID: "bea53af7-e0a1-4d21-8525-4288cd88ecd7"). InnerVolumeSpecName "kube-api-access-nxbdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:04:06 crc kubenswrapper[4900]: I1202 15:04:06.323870 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxbdp\" (UniqueName: \"kubernetes.io/projected/bea53af7-e0a1-4d21-8525-4288cd88ecd7-kube-api-access-nxbdp\") on node \"crc\" DevicePath \"\"" Dec 02 15:04:06 crc kubenswrapper[4900]: I1202 15:04:06.524515 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="496ba2848884dcc4cb186d2a5ec54f2efd260e872d7bebf5404c6d1ef8e78332" Dec 02 15:04:06 crc kubenswrapper[4900]: I1202 15:04:06.524583 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 02 15:04:06 crc kubenswrapper[4900]: I1202 15:04:06.923173 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea53af7-e0a1-4d21-8525-4288cd88ecd7" path="/var/lib/kubelet/pods/bea53af7-e0a1-4d21-8525-4288cd88ecd7/volumes" Dec 02 15:04:10 crc kubenswrapper[4900]: I1202 15:04:10.298349 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Dec 02 15:04:10 crc kubenswrapper[4900]: E1202 15:04:10.298782 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea53af7-e0a1-4d21-8525-4288cd88ecd7" containerName="mariadb-client-4-default" Dec 02 15:04:10 crc kubenswrapper[4900]: I1202 15:04:10.298797 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea53af7-e0a1-4d21-8525-4288cd88ecd7" containerName="mariadb-client-4-default" Dec 02 15:04:10 crc kubenswrapper[4900]: I1202 15:04:10.298962 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea53af7-e0a1-4d21-8525-4288cd88ecd7" containerName="mariadb-client-4-default" Dec 02 15:04:10 crc kubenswrapper[4900]: I1202 15:04:10.299469 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 02 15:04:10 crc kubenswrapper[4900]: I1202 15:04:10.301409 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rpf9s" Dec 02 15:04:10 crc kubenswrapper[4900]: I1202 15:04:10.310951 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 02 15:04:10 crc kubenswrapper[4900]: I1202 15:04:10.484901 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm9xn\" (UniqueName: \"kubernetes.io/projected/7ab07775-0cb1-44f7-b1e4-17f09ffca625-kube-api-access-gm9xn\") pod \"mariadb-client-5-default\" (UID: \"7ab07775-0cb1-44f7-b1e4-17f09ffca625\") " pod="openstack/mariadb-client-5-default" Dec 02 15:04:10 crc kubenswrapper[4900]: I1202 15:04:10.586946 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm9xn\" (UniqueName: \"kubernetes.io/projected/7ab07775-0cb1-44f7-b1e4-17f09ffca625-kube-api-access-gm9xn\") pod \"mariadb-client-5-default\" (UID: \"7ab07775-0cb1-44f7-b1e4-17f09ffca625\") " pod="openstack/mariadb-client-5-default" Dec 02 15:04:10 crc kubenswrapper[4900]: I1202 15:04:10.610867 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm9xn\" (UniqueName: \"kubernetes.io/projected/7ab07775-0cb1-44f7-b1e4-17f09ffca625-kube-api-access-gm9xn\") pod \"mariadb-client-5-default\" (UID: \"7ab07775-0cb1-44f7-b1e4-17f09ffca625\") " pod="openstack/mariadb-client-5-default" Dec 02 15:04:10 crc kubenswrapper[4900]: I1202 15:04:10.625679 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 02 15:04:11 crc kubenswrapper[4900]: I1202 15:04:11.134915 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 02 15:04:11 crc kubenswrapper[4900]: I1202 15:04:11.595173 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"7ab07775-0cb1-44f7-b1e4-17f09ffca625","Type":"ContainerStarted","Data":"1a7cd29a36747baa0b5332263ec93604ab69ede92c369fb077833d62b08fc2ed"} Dec 02 15:04:11 crc kubenswrapper[4900]: I1202 15:04:11.595556 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"7ab07775-0cb1-44f7-b1e4-17f09ffca625","Type":"ContainerStarted","Data":"92b499e0fa19f2c9e0c90c84cdf1a6069b7223df3e2db91ad608069bf7466a68"} Dec 02 15:04:12 crc kubenswrapper[4900]: I1202 15:04:12.603688 4900 generic.go:334] "Generic (PLEG): container finished" podID="7ab07775-0cb1-44f7-b1e4-17f09ffca625" containerID="1a7cd29a36747baa0b5332263ec93604ab69ede92c369fb077833d62b08fc2ed" exitCode=0 Dec 02 15:04:12 crc kubenswrapper[4900]: I1202 15:04:12.603728 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"7ab07775-0cb1-44f7-b1e4-17f09ffca625","Type":"ContainerDied","Data":"1a7cd29a36747baa0b5332263ec93604ab69ede92c369fb077833d62b08fc2ed"} Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.053817 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.072227 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_7ab07775-0cb1-44f7-b1e4-17f09ffca625/mariadb-client-5-default/0.log" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.100825 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.108997 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.175367 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm9xn\" (UniqueName: \"kubernetes.io/projected/7ab07775-0cb1-44f7-b1e4-17f09ffca625-kube-api-access-gm9xn\") pod \"7ab07775-0cb1-44f7-b1e4-17f09ffca625\" (UID: \"7ab07775-0cb1-44f7-b1e4-17f09ffca625\") " Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.181438 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab07775-0cb1-44f7-b1e4-17f09ffca625-kube-api-access-gm9xn" (OuterVolumeSpecName: "kube-api-access-gm9xn") pod "7ab07775-0cb1-44f7-b1e4-17f09ffca625" (UID: "7ab07775-0cb1-44f7-b1e4-17f09ffca625"). InnerVolumeSpecName "kube-api-access-gm9xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.248237 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Dec 02 15:04:14 crc kubenswrapper[4900]: E1202 15:04:14.248527 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab07775-0cb1-44f7-b1e4-17f09ffca625" containerName="mariadb-client-5-default" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.248539 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab07775-0cb1-44f7-b1e4-17f09ffca625" containerName="mariadb-client-5-default" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.248696 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab07775-0cb1-44f7-b1e4-17f09ffca625" containerName="mariadb-client-5-default" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.249212 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.263222 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.278106 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cndlb\" (UniqueName: \"kubernetes.io/projected/abe4fa50-e9d4-4624-a819-7a1e3d0bff6b-kube-api-access-cndlb\") pod \"mariadb-client-6-default\" (UID: \"abe4fa50-e9d4-4624-a819-7a1e3d0bff6b\") " pod="openstack/mariadb-client-6-default" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.278787 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm9xn\" (UniqueName: \"kubernetes.io/projected/7ab07775-0cb1-44f7-b1e4-17f09ffca625-kube-api-access-gm9xn\") on node \"crc\" DevicePath \"\"" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.380122 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cndlb\" (UniqueName: \"kubernetes.io/projected/abe4fa50-e9d4-4624-a819-7a1e3d0bff6b-kube-api-access-cndlb\") pod \"mariadb-client-6-default\" (UID: \"abe4fa50-e9d4-4624-a819-7a1e3d0bff6b\") " pod="openstack/mariadb-client-6-default" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.617572 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92b499e0fa19f2c9e0c90c84cdf1a6069b7223df3e2db91ad608069bf7466a68" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.617631 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.656435 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cndlb\" (UniqueName: \"kubernetes.io/projected/abe4fa50-e9d4-4624-a819-7a1e3d0bff6b-kube-api-access-cndlb\") pod \"mariadb-client-6-default\" (UID: \"abe4fa50-e9d4-4624-a819-7a1e3d0bff6b\") " pod="openstack/mariadb-client-6-default" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.873377 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 02 15:04:14 crc kubenswrapper[4900]: I1202 15:04:14.926089 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab07775-0cb1-44f7-b1e4-17f09ffca625" path="/var/lib/kubelet/pods/7ab07775-0cb1-44f7-b1e4-17f09ffca625/volumes" Dec 02 15:04:15 crc kubenswrapper[4900]: I1202 15:04:15.385995 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 02 15:04:15 crc kubenswrapper[4900]: I1202 15:04:15.627763 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"abe4fa50-e9d4-4624-a819-7a1e3d0bff6b","Type":"ContainerStarted","Data":"2316b256ea2b503f42a7d25505807b810d7e81c61d718091586106417c85b48c"} Dec 02 15:04:15 crc kubenswrapper[4900]: I1202 15:04:15.628066 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"abe4fa50-e9d4-4624-a819-7a1e3d0bff6b","Type":"ContainerStarted","Data":"f611f82f7a7d938812bc78cd359abe7c29316695e7134150f219be48761a9519"} Dec 02 15:04:15 crc kubenswrapper[4900]: I1202 15:04:15.644078 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.644061352 podStartE2EDuration="1.644061352s" podCreationTimestamp="2025-12-02 15:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:04:15.64149728 +0000 UTC m=+4901.057311141" watchObservedRunningTime="2025-12-02 15:04:15.644061352 +0000 UTC m=+4901.059875203" Dec 02 15:04:16 crc kubenswrapper[4900]: I1202 15:04:16.638147 4900 generic.go:334] "Generic (PLEG): container finished" podID="abe4fa50-e9d4-4624-a819-7a1e3d0bff6b" containerID="2316b256ea2b503f42a7d25505807b810d7e81c61d718091586106417c85b48c" exitCode=1 Dec 02 15:04:16 crc kubenswrapper[4900]: I1202 15:04:16.638509 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"abe4fa50-e9d4-4624-a819-7a1e3d0bff6b","Type":"ContainerDied","Data":"2316b256ea2b503f42a7d25505807b810d7e81c61d718091586106417c85b48c"} Dec 02 15:04:17 crc kubenswrapper[4900]: I1202 15:04:17.988992 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.019847 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.025650 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.039374 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cndlb\" (UniqueName: \"kubernetes.io/projected/abe4fa50-e9d4-4624-a819-7a1e3d0bff6b-kube-api-access-cndlb\") pod \"abe4fa50-e9d4-4624-a819-7a1e3d0bff6b\" (UID: \"abe4fa50-e9d4-4624-a819-7a1e3d0bff6b\") " Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.044889 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe4fa50-e9d4-4624-a819-7a1e3d0bff6b-kube-api-access-cndlb" (OuterVolumeSpecName: "kube-api-access-cndlb") pod "abe4fa50-e9d4-4624-a819-7a1e3d0bff6b" (UID: "abe4fa50-e9d4-4624-a819-7a1e3d0bff6b"). InnerVolumeSpecName "kube-api-access-cndlb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.142853 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cndlb\" (UniqueName: \"kubernetes.io/projected/abe4fa50-e9d4-4624-a819-7a1e3d0bff6b-kube-api-access-cndlb\") on node \"crc\" DevicePath \"\"" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.155511 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Dec 02 15:04:18 crc kubenswrapper[4900]: E1202 15:04:18.156079 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe4fa50-e9d4-4624-a819-7a1e3d0bff6b" containerName="mariadb-client-6-default" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.156102 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe4fa50-e9d4-4624-a819-7a1e3d0bff6b" containerName="mariadb-client-6-default" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.156259 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe4fa50-e9d4-4624-a819-7a1e3d0bff6b" containerName="mariadb-client-6-default" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.156758 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.163417 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.244469 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6m2z\" (UniqueName: \"kubernetes.io/projected/13644a74-859b-4be1-85bc-0e570c85a29f-kube-api-access-x6m2z\") pod \"mariadb-client-7-default\" (UID: \"13644a74-859b-4be1-85bc-0e570c85a29f\") " pod="openstack/mariadb-client-7-default" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.346820 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6m2z\" (UniqueName: \"kubernetes.io/projected/13644a74-859b-4be1-85bc-0e570c85a29f-kube-api-access-x6m2z\") pod \"mariadb-client-7-default\" (UID: \"13644a74-859b-4be1-85bc-0e570c85a29f\") " pod="openstack/mariadb-client-7-default" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.362368 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6m2z\" (UniqueName: \"kubernetes.io/projected/13644a74-859b-4be1-85bc-0e570c85a29f-kube-api-access-x6m2z\") pod \"mariadb-client-7-default\" (UID: \"13644a74-859b-4be1-85bc-0e570c85a29f\") " pod="openstack/mariadb-client-7-default" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.480586 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.654332 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f611f82f7a7d938812bc78cd359abe7c29316695e7134150f219be48761a9519" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.654747 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 02 15:04:18 crc kubenswrapper[4900]: I1202 15:04:18.926250 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe4fa50-e9d4-4624-a819-7a1e3d0bff6b" path="/var/lib/kubelet/pods/abe4fa50-e9d4-4624-a819-7a1e3d0bff6b/volumes" Dec 02 15:04:19 crc kubenswrapper[4900]: I1202 15:04:19.095947 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 02 15:04:19 crc kubenswrapper[4900]: I1202 15:04:19.664265 4900 generic.go:334] "Generic (PLEG): container finished" podID="13644a74-859b-4be1-85bc-0e570c85a29f" containerID="9c5a9818de0bba4699fe2598e6338a3b8b04e81d8dbc03a338003f377349e3a2" exitCode=0 Dec 02 15:04:19 crc kubenswrapper[4900]: I1202 15:04:19.664355 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"13644a74-859b-4be1-85bc-0e570c85a29f","Type":"ContainerDied","Data":"9c5a9818de0bba4699fe2598e6338a3b8b04e81d8dbc03a338003f377349e3a2"} Dec 02 15:04:19 crc kubenswrapper[4900]: I1202 15:04:19.664606 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"13644a74-859b-4be1-85bc-0e570c85a29f","Type":"ContainerStarted","Data":"4f2f834c8afdd10eb5e206b61c36307e4a4ea59f83a6440552e072fdf36a02ce"} Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.123361 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.142330 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_13644a74-859b-4be1-85bc-0e570c85a29f/mariadb-client-7-default/0.log" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.171384 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.176055 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.197537 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6m2z\" (UniqueName: \"kubernetes.io/projected/13644a74-859b-4be1-85bc-0e570c85a29f-kube-api-access-x6m2z\") pod \"13644a74-859b-4be1-85bc-0e570c85a29f\" (UID: \"13644a74-859b-4be1-85bc-0e570c85a29f\") " Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.203485 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13644a74-859b-4be1-85bc-0e570c85a29f-kube-api-access-x6m2z" (OuterVolumeSpecName: "kube-api-access-x6m2z") pod "13644a74-859b-4be1-85bc-0e570c85a29f" (UID: "13644a74-859b-4be1-85bc-0e570c85a29f"). InnerVolumeSpecName "kube-api-access-x6m2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.299690 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6m2z\" (UniqueName: \"kubernetes.io/projected/13644a74-859b-4be1-85bc-0e570c85a29f-kube-api-access-x6m2z\") on node \"crc\" DevicePath \"\"" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.331803 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Dec 02 15:04:21 crc kubenswrapper[4900]: E1202 15:04:21.332217 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13644a74-859b-4be1-85bc-0e570c85a29f" containerName="mariadb-client-7-default" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.332235 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="13644a74-859b-4be1-85bc-0e570c85a29f" containerName="mariadb-client-7-default" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.332436 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="13644a74-859b-4be1-85bc-0e570c85a29f" containerName="mariadb-client-7-default" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.333050 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.343550 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.400780 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjcw9\" (UniqueName: \"kubernetes.io/projected/c2c6a28a-e4da-4fd1-96a4-74198f2e5e83-kube-api-access-bjcw9\") pod \"mariadb-client-2\" (UID: \"c2c6a28a-e4da-4fd1-96a4-74198f2e5e83\") " pod="openstack/mariadb-client-2" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.502426 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjcw9\" (UniqueName: \"kubernetes.io/projected/c2c6a28a-e4da-4fd1-96a4-74198f2e5e83-kube-api-access-bjcw9\") pod \"mariadb-client-2\" (UID: \"c2c6a28a-e4da-4fd1-96a4-74198f2e5e83\") " pod="openstack/mariadb-client-2" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.523951 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjcw9\" (UniqueName: \"kubernetes.io/projected/c2c6a28a-e4da-4fd1-96a4-74198f2e5e83-kube-api-access-bjcw9\") pod \"mariadb-client-2\" (UID: \"c2c6a28a-e4da-4fd1-96a4-74198f2e5e83\") " pod="openstack/mariadb-client-2" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.672141 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.684504 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f2f834c8afdd10eb5e206b61c36307e4a4ea59f83a6440552e072fdf36a02ce" Dec 02 15:04:21 crc kubenswrapper[4900]: I1202 15:04:21.684588 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 02 15:04:22 crc kubenswrapper[4900]: I1202 15:04:22.259542 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Dec 02 15:04:22 crc kubenswrapper[4900]: W1202 15:04:22.266401 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2c6a28a_e4da_4fd1_96a4_74198f2e5e83.slice/crio-0ea568dd431a69b1666a4308c218490ca223e967be57ca7370cc4a42cc0c1c2c WatchSource:0}: Error finding container 0ea568dd431a69b1666a4308c218490ca223e967be57ca7370cc4a42cc0c1c2c: Status 404 returned error can't find the container with id 0ea568dd431a69b1666a4308c218490ca223e967be57ca7370cc4a42cc0c1c2c Dec 02 15:04:22 crc kubenswrapper[4900]: I1202 15:04:22.698227 4900 generic.go:334] "Generic (PLEG): container finished" podID="c2c6a28a-e4da-4fd1-96a4-74198f2e5e83" containerID="fe7bdd2b76cc289dcb63dfb4b76177e0c6c4ae098f05034bba5ea3528063bf28" exitCode=0 Dec 02 15:04:22 crc kubenswrapper[4900]: I1202 15:04:22.698265 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"c2c6a28a-e4da-4fd1-96a4-74198f2e5e83","Type":"ContainerDied","Data":"fe7bdd2b76cc289dcb63dfb4b76177e0c6c4ae098f05034bba5ea3528063bf28"} Dec 02 15:04:22 crc kubenswrapper[4900]: I1202 15:04:22.698292 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"c2c6a28a-e4da-4fd1-96a4-74198f2e5e83","Type":"ContainerStarted","Data":"0ea568dd431a69b1666a4308c218490ca223e967be57ca7370cc4a42cc0c1c2c"} Dec 02 15:04:22 crc kubenswrapper[4900]: I1202 15:04:22.919763 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13644a74-859b-4be1-85bc-0e570c85a29f" path="/var/lib/kubelet/pods/13644a74-859b-4be1-85bc-0e570c85a29f/volumes" Dec 02 15:04:24 crc kubenswrapper[4900]: I1202 15:04:24.128880 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 02 15:04:24 crc kubenswrapper[4900]: I1202 15:04:24.150757 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_c2c6a28a-e4da-4fd1-96a4-74198f2e5e83/mariadb-client-2/0.log" Dec 02 15:04:24 crc kubenswrapper[4900]: I1202 15:04:24.172914 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Dec 02 15:04:24 crc kubenswrapper[4900]: I1202 15:04:24.181216 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Dec 02 15:04:24 crc kubenswrapper[4900]: I1202 15:04:24.243198 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjcw9\" (UniqueName: \"kubernetes.io/projected/c2c6a28a-e4da-4fd1-96a4-74198f2e5e83-kube-api-access-bjcw9\") pod \"c2c6a28a-e4da-4fd1-96a4-74198f2e5e83\" (UID: \"c2c6a28a-e4da-4fd1-96a4-74198f2e5e83\") " Dec 02 15:04:24 crc kubenswrapper[4900]: I1202 15:04:24.248339 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c6a28a-e4da-4fd1-96a4-74198f2e5e83-kube-api-access-bjcw9" (OuterVolumeSpecName: "kube-api-access-bjcw9") pod "c2c6a28a-e4da-4fd1-96a4-74198f2e5e83" (UID: "c2c6a28a-e4da-4fd1-96a4-74198f2e5e83"). InnerVolumeSpecName "kube-api-access-bjcw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:04:24 crc kubenswrapper[4900]: I1202 15:04:24.347462 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjcw9\" (UniqueName: \"kubernetes.io/projected/c2c6a28a-e4da-4fd1-96a4-74198f2e5e83-kube-api-access-bjcw9\") on node \"crc\" DevicePath \"\"" Dec 02 15:04:24 crc kubenswrapper[4900]: I1202 15:04:24.712297 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ea568dd431a69b1666a4308c218490ca223e967be57ca7370cc4a42cc0c1c2c" Dec 02 15:04:24 crc kubenswrapper[4900]: I1202 15:04:24.712390 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 02 15:04:24 crc kubenswrapper[4900]: I1202 15:04:24.917840 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c6a28a-e4da-4fd1-96a4-74198f2e5e83" path="/var/lib/kubelet/pods/c2c6a28a-e4da-4fd1-96a4-74198f2e5e83/volumes" Dec 02 15:05:00 crc kubenswrapper[4900]: I1202 15:05:00.537221 4900 scope.go:117] "RemoveContainer" containerID="73dc3ee87056981afdb2860c197420079ce2e31cf81b8d84f1a980efb08066cb" Dec 02 15:05:15 crc kubenswrapper[4900]: I1202 15:05:15.116555 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:05:15 crc kubenswrapper[4900]: I1202 15:05:15.118751 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:05:45 crc kubenswrapper[4900]: I1202 15:05:45.116544 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:05:45 crc kubenswrapper[4900]: I1202 15:05:45.117322 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:06:15 crc kubenswrapper[4900]: I1202 15:06:15.117370 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:06:15 crc kubenswrapper[4900]: I1202 15:06:15.118072 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:06:15 crc kubenswrapper[4900]: I1202 15:06:15.118142 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 15:06:15 crc kubenswrapper[4900]: I1202 15:06:15.119291 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:06:15 crc kubenswrapper[4900]: I1202 15:06:15.119401 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" gracePeriod=600 Dec 02 15:06:15 crc kubenswrapper[4900]: I1202 15:06:15.699497 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" exitCode=0 Dec 02 15:06:15 crc kubenswrapper[4900]: I1202 15:06:15.699774 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7"} Dec 02 15:06:15 crc kubenswrapper[4900]: I1202 15:06:15.699803 4900 scope.go:117] "RemoveContainer" containerID="b795cb18b3b3ac1e48eb43f789980baa8ed78f685f14ca23961f153619a7bd73" Dec 02 15:06:15 crc kubenswrapper[4900]: E1202 15:06:15.748604 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:06:16 crc kubenswrapper[4900]: I1202 15:06:16.713595 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:06:16 crc kubenswrapper[4900]: E1202 15:06:16.714022 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:06:26 crc kubenswrapper[4900]: I1202 15:06:26.910467 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:06:26 crc kubenswrapper[4900]: E1202 15:06:26.911558 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:06:37 crc kubenswrapper[4900]: I1202 
15:06:37.910116 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:06:37 crc kubenswrapper[4900]: E1202 15:06:37.911389 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:06:49 crc kubenswrapper[4900]: I1202 15:06:49.910338 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:06:49 crc kubenswrapper[4900]: E1202 15:06:49.911175 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:07:00 crc kubenswrapper[4900]: I1202 15:07:00.910309 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:07:00 crc kubenswrapper[4900]: E1202 15:07:00.911154 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:07:15 crc kubenswrapper[4900]: I1202 15:07:15.910454 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:07:15 crc kubenswrapper[4900]: E1202 15:07:15.911171 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:07:30 crc kubenswrapper[4900]: I1202 15:07:30.912206 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:07:30 crc kubenswrapper[4900]: E1202 15:07:30.913312 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:07:45 crc kubenswrapper[4900]: I1202 15:07:45.909915 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:07:45 crc kubenswrapper[4900]: E1202 15:07:45.910788 
4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:07:58 crc kubenswrapper[4900]: I1202 15:07:58.910495 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:07:58 crc kubenswrapper[4900]: E1202 15:07:58.912587 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:08:09 crc kubenswrapper[4900]: I1202 15:08:09.912015 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:08:09 crc kubenswrapper[4900]: E1202 15:08:09.913724 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:08:20 crc kubenswrapper[4900]: I1202 15:08:20.910972 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:08:20 crc kubenswrapper[4900]: E1202 15:08:20.912169 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:08:32 crc kubenswrapper[4900]: I1202 15:08:32.910355 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:08:32 crc kubenswrapper[4900]: E1202 15:08:32.911060 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:08:45 crc kubenswrapper[4900]: I1202 15:08:45.910898 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:08:45 crc kubenswrapper[4900]: E1202 15:08:45.913423 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:08:57 crc kubenswrapper[4900]: I1202 15:08:57.917576 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Dec 02 15:08:57 crc kubenswrapper[4900]: E1202 15:08:57.918657 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c6a28a-e4da-4fd1-96a4-74198f2e5e83" containerName="mariadb-client-2" Dec 02 15:08:57 crc kubenswrapper[4900]: I1202 15:08:57.918676 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c6a28a-e4da-4fd1-96a4-74198f2e5e83" containerName="mariadb-client-2" Dec 02 15:08:57 crc kubenswrapper[4900]: I1202 15:08:57.918921 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c6a28a-e4da-4fd1-96a4-74198f2e5e83" containerName="mariadb-client-2" Dec 02 15:08:57 crc kubenswrapper[4900]: I1202 15:08:57.919513 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Dec 02 15:08:57 crc kubenswrapper[4900]: I1202 15:08:57.922936 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rpf9s" Dec 02 15:08:57 crc kubenswrapper[4900]: I1202 15:08:57.938033 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Dec 02 15:08:58 crc kubenswrapper[4900]: I1202 15:08:58.032889 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgj9r\" (UniqueName: \"kubernetes.io/projected/fde68498-38c9-4808-ae8e-91dde7a6d2c9-kube-api-access-fgj9r\") pod \"mariadb-copy-data\" (UID: \"fde68498-38c9-4808-ae8e-91dde7a6d2c9\") " pod="openstack/mariadb-copy-data" Dec 02 15:08:58 crc kubenswrapper[4900]: I1202 15:08:58.032978 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\") pod \"mariadb-copy-data\" (UID: \"fde68498-38c9-4808-ae8e-91dde7a6d2c9\") " pod="openstack/mariadb-copy-data" Dec 02 15:08:58 crc kubenswrapper[4900]: I1202 15:08:58.134045 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgj9r\" (UniqueName: \"kubernetes.io/projected/fde68498-38c9-4808-ae8e-91dde7a6d2c9-kube-api-access-fgj9r\") pod \"mariadb-copy-data\" (UID: \"fde68498-38c9-4808-ae8e-91dde7a6d2c9\") " pod="openstack/mariadb-copy-data" Dec 02 15:08:58 crc kubenswrapper[4900]: I1202 15:08:58.134147 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\") pod \"mariadb-copy-data\" (UID: \"fde68498-38c9-4808-ae8e-91dde7a6d2c9\") " pod="openstack/mariadb-copy-data" Dec 02 15:08:58 crc kubenswrapper[4900]: I1202 15:08:58.136910 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
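
The csi_attacher.go line above explains why no NodeStageVolume call appears for the kubevirt.io.hostpath-provisioner PVC: the attacher only performs MountDevice when the driver advertises STAGE_UNSTAGE_VOLUME in its NodeGetCapabilities response, and otherwise skips straight to NodePublishVolume (the "MountVolume.SetUp" step that succeeds a few entries later). Below is a minimal, self-contained Go sketch of that decision; the type and capability strings are illustrative stand-ins for the CSI response, not kubelet's real csi_attacher.go code.

// csistage.go — schematic model of the "Skipping MountDevice..." branch.
package main

import "fmt"

// nodeCapabilities stands in for the RPC capability list a CSI driver
// returns from NodeGetCapabilities; the string values are illustrative.
type nodeCapabilities []string

func (c nodeCapabilities) has(name string) bool {
	for _, v := range c {
		if v == name {
			return true
		}
	}
	return false
}

func main() {
	// Per the log, the hostpath provisioner does not stage volumes,
	// so its capability list lacks STAGE_UNSTAGE_VOLUME.
	hostpath := nodeCapabilities{}
	if hostpath.has("STAGE_UNSTAGE_VOLUME") {
		fmt.Println("MountDevice: calling NodeStageVolume against the global mount path")
	} else {
		// Corresponds to: "attacher.MountDevice STAGE_UNSTAGE_VOLUME
		// capability not set. Skipping MountDevice..."
		fmt.Println("Skipping MountDevice; proceeding to NodePublishVolume (SetUp)")
	}
}

When staging is skipped, the per-pod bind mount is still set up, which is why the log shows "MountVolume.MountDevice succeeded" with a device mount path under .../globalmount followed immediately by "MountVolume.SetUp succeeded" for the same PVC.
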
Dec 02 15:08:58 crc kubenswrapper[4900]: I1202 15:08:58.136956 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\") pod \"mariadb-copy-data\" (UID: \"fde68498-38c9-4808-ae8e-91dde7a6d2c9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cfc1569f003a03e4879fe646e3e3fad22dccb89ca0058392de870ed9d92e4162/globalmount\"" pod="openstack/mariadb-copy-data" Dec 02 15:08:58 crc kubenswrapper[4900]: I1202 15:08:58.158869 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgj9r\" (UniqueName: \"kubernetes.io/projected/fde68498-38c9-4808-ae8e-91dde7a6d2c9-kube-api-access-fgj9r\") pod \"mariadb-copy-data\" (UID: \"fde68498-38c9-4808-ae8e-91dde7a6d2c9\") " pod="openstack/mariadb-copy-data" Dec 02 15:08:58 crc kubenswrapper[4900]: I1202 15:08:58.165315 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\") pod \"mariadb-copy-data\" (UID: \"fde68498-38c9-4808-ae8e-91dde7a6d2c9\") " pod="openstack/mariadb-copy-data" Dec 02 15:08:58 crc kubenswrapper[4900]: I1202 15:08:58.260459 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Dec 02 15:08:58 crc kubenswrapper[4900]: I1202 15:08:58.801032 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Dec 02 15:08:58 crc kubenswrapper[4900]: I1202 15:08:58.910808 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:08:58 crc kubenswrapper[4900]: E1202 15:08:58.911305 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:08:59 crc kubenswrapper[4900]: I1202 15:08:59.211206 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fde68498-38c9-4808-ae8e-91dde7a6d2c9","Type":"ContainerStarted","Data":"e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe"} Dec 02 15:08:59 crc kubenswrapper[4900]: I1202 15:08:59.211254 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fde68498-38c9-4808-ae8e-91dde7a6d2c9","Type":"ContainerStarted","Data":"fe58b1aaced4f9d225c6ba628b17352494262d2a5fee85d5450308521241d4ae"} Dec 02 15:08:59 crc kubenswrapper[4900]: I1202 15:08:59.232852 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.232827663 podStartE2EDuration="3.232827663s" podCreationTimestamp="2025-12-02 15:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:08:59.229014756 +0000 UTC m=+5184.644828617" watchObservedRunningTime="2025-12-02 15:08:59.232827663 +0000 UTC m=+5184.648641514" Dec 02 15:09:02 crc 
kubenswrapper[4900]: I1202 15:09:02.259502 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Dec 02 15:09:02 crc kubenswrapper[4900]: I1202 15:09:02.261501 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Dec 02 15:09:02 crc kubenswrapper[4900]: I1202 15:09:02.267818 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Dec 02 15:09:02 crc kubenswrapper[4900]: I1202 15:09:02.299712 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sxrt\" (UniqueName: \"kubernetes.io/projected/843836e9-e2fd-4759-ab24-51c6b61b3ff8-kube-api-access-8sxrt\") pod \"mariadb-client\" (UID: \"843836e9-e2fd-4759-ab24-51c6b61b3ff8\") " pod="openstack/mariadb-client" Dec 02 15:09:02 crc kubenswrapper[4900]: I1202 15:09:02.409137 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sxrt\" (UniqueName: \"kubernetes.io/projected/843836e9-e2fd-4759-ab24-51c6b61b3ff8-kube-api-access-8sxrt\") pod \"mariadb-client\" (UID: \"843836e9-e2fd-4759-ab24-51c6b61b3ff8\") " pod="openstack/mariadb-client" Dec 02 15:09:02 crc kubenswrapper[4900]: I1202 15:09:02.433675 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sxrt\" (UniqueName: \"kubernetes.io/projected/843836e9-e2fd-4759-ab24-51c6b61b3ff8-kube-api-access-8sxrt\") pod \"mariadb-client\" (UID: \"843836e9-e2fd-4759-ab24-51c6b61b3ff8\") " pod="openstack/mariadb-client" Dec 02 15:09:02 crc kubenswrapper[4900]: I1202 15:09:02.593154 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Dec 02 15:09:02 crc kubenswrapper[4900]: I1202 15:09:02.874510 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Dec 02 15:09:02 crc kubenswrapper[4900]: W1202 15:09:02.886168 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod843836e9_e2fd_4759_ab24_51c6b61b3ff8.slice/crio-9ff1b19cc9c9c3626e1bd655a87fcfce044f8eaf4a6af63fbb69db68bf3ce9a6 WatchSource:0}: Error finding container 9ff1b19cc9c9c3626e1bd655a87fcfce044f8eaf4a6af63fbb69db68bf3ce9a6: Status 404 returned error can't find the container with id 9ff1b19cc9c9c3626e1bd655a87fcfce044f8eaf4a6af63fbb69db68bf3ce9a6 Dec 02 15:09:03 crc kubenswrapper[4900]: I1202 15:09:03.258784 4900 generic.go:334] "Generic (PLEG): container finished" podID="843836e9-e2fd-4759-ab24-51c6b61b3ff8" containerID="5cdc9341e13a8b279f836a5942fe276d1da13d657443254d14556d294d58b95a" exitCode=0 Dec 02 15:09:03 crc kubenswrapper[4900]: I1202 15:09:03.258924 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"843836e9-e2fd-4759-ab24-51c6b61b3ff8","Type":"ContainerDied","Data":"5cdc9341e13a8b279f836a5942fe276d1da13d657443254d14556d294d58b95a"} Dec 02 15:09:03 crc kubenswrapper[4900]: I1202 15:09:03.259241 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"843836e9-e2fd-4759-ab24-51c6b61b3ff8","Type":"ContainerStarted","Data":"9ff1b19cc9c9c3626e1bd655a87fcfce044f8eaf4a6af63fbb69db68bf3ce9a6"} Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.563483 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.583037 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_843836e9-e2fd-4759-ab24-51c6b61b3ff8/mariadb-client/0.log" Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.615552 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.622149 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.663395 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sxrt\" (UniqueName: \"kubernetes.io/projected/843836e9-e2fd-4759-ab24-51c6b61b3ff8-kube-api-access-8sxrt\") pod \"843836e9-e2fd-4759-ab24-51c6b61b3ff8\" (UID: \"843836e9-e2fd-4759-ab24-51c6b61b3ff8\") " Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.669063 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843836e9-e2fd-4759-ab24-51c6b61b3ff8-kube-api-access-8sxrt" (OuterVolumeSpecName: "kube-api-access-8sxrt") pod "843836e9-e2fd-4759-ab24-51c6b61b3ff8" (UID: "843836e9-e2fd-4759-ab24-51c6b61b3ff8"). InnerVolumeSpecName "kube-api-access-8sxrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.755742 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Dec 02 15:09:04 crc kubenswrapper[4900]: E1202 15:09:04.756295 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843836e9-e2fd-4759-ab24-51c6b61b3ff8" containerName="mariadb-client" Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.756314 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="843836e9-e2fd-4759-ab24-51c6b61b3ff8" containerName="mariadb-client" Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.756516 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="843836e9-e2fd-4759-ab24-51c6b61b3ff8" containerName="mariadb-client" Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.757093 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.763811 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.764917 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sxrt\" (UniqueName: \"kubernetes.io/projected/843836e9-e2fd-4759-ab24-51c6b61b3ff8-kube-api-access-8sxrt\") on node \"crc\" DevicePath \"\"" Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.866834 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xckc2\" (UniqueName: \"kubernetes.io/projected/3d830c2b-5ac4-4489-abff-3fd320f807f7-kube-api-access-xckc2\") pod \"mariadb-client\" (UID: \"3d830c2b-5ac4-4489-abff-3fd320f807f7\") " pod="openstack/mariadb-client" Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.921032 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="843836e9-e2fd-4759-ab24-51c6b61b3ff8" path="/var/lib/kubelet/pods/843836e9-e2fd-4759-ab24-51c6b61b3ff8/volumes" Dec 02 15:09:04 crc kubenswrapper[4900]: I1202 15:09:04.969024 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xckc2\" (UniqueName: \"kubernetes.io/projected/3d830c2b-5ac4-4489-abff-3fd320f807f7-kube-api-access-xckc2\") pod \"mariadb-client\" (UID: \"3d830c2b-5ac4-4489-abff-3fd320f807f7\") " pod="openstack/mariadb-client" Dec 02 15:09:05 crc kubenswrapper[4900]: I1202 15:09:05.000295 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xckc2\" (UniqueName: \"kubernetes.io/projected/3d830c2b-5ac4-4489-abff-3fd320f807f7-kube-api-access-xckc2\") pod \"mariadb-client\" (UID: \"3d830c2b-5ac4-4489-abff-3fd320f807f7\") " pod="openstack/mariadb-client" Dec 02 15:09:05 crc kubenswrapper[4900]: I1202 15:09:05.078388 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Dec 02 15:09:05 crc kubenswrapper[4900]: I1202 15:09:05.276782 4900 scope.go:117] "RemoveContainer" containerID="5cdc9341e13a8b279f836a5942fe276d1da13d657443254d14556d294d58b95a" Dec 02 15:09:05 crc kubenswrapper[4900]: I1202 15:09:05.276824 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Dec 02 15:09:05 crc kubenswrapper[4900]: I1202 15:09:05.532545 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Dec 02 15:09:05 crc kubenswrapper[4900]: W1202 15:09:05.538998 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d830c2b_5ac4_4489_abff_3fd320f807f7.slice/crio-d51294229d662a263039b0638013614cfef93013f30536642fa6e8a51ebc377a WatchSource:0}: Error finding container d51294229d662a263039b0638013614cfef93013f30536642fa6e8a51ebc377a: Status 404 returned error can't find the container with id d51294229d662a263039b0638013614cfef93013f30536642fa6e8a51ebc377a Dec 02 15:09:06 crc kubenswrapper[4900]: I1202 15:09:06.290761 4900 generic.go:334] "Generic (PLEG): container finished" podID="3d830c2b-5ac4-4489-abff-3fd320f807f7" containerID="10d4cf8329f1fd707442acf67a1e32113932d8ab6b32f1c4af4a69eb890f7c87" exitCode=0 Dec 02 15:09:06 crc kubenswrapper[4900]: I1202 15:09:06.290875 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"3d830c2b-5ac4-4489-abff-3fd320f807f7","Type":"ContainerDied","Data":"10d4cf8329f1fd707442acf67a1e32113932d8ab6b32f1c4af4a69eb890f7c87"} Dec 02 15:09:06 crc kubenswrapper[4900]: I1202 15:09:06.291284 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"3d830c2b-5ac4-4489-abff-3fd320f807f7","Type":"ContainerStarted","Data":"d51294229d662a263039b0638013614cfef93013f30536642fa6e8a51ebc377a"} Dec 02 15:09:07 crc kubenswrapper[4900]: I1202 15:09:07.686891 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Dec 02 15:09:07 crc kubenswrapper[4900]: I1202 15:09:07.706655 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_3d830c2b-5ac4-4489-abff-3fd320f807f7/mariadb-client/0.log" Dec 02 15:09:07 crc kubenswrapper[4900]: I1202 15:09:07.732329 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xckc2\" (UniqueName: \"kubernetes.io/projected/3d830c2b-5ac4-4489-abff-3fd320f807f7-kube-api-access-xckc2\") pod \"3d830c2b-5ac4-4489-abff-3fd320f807f7\" (UID: \"3d830c2b-5ac4-4489-abff-3fd320f807f7\") " Dec 02 15:09:07 crc kubenswrapper[4900]: I1202 15:09:07.747266 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Dec 02 15:09:07 crc kubenswrapper[4900]: I1202 15:09:07.757063 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Dec 02 15:09:07 crc kubenswrapper[4900]: I1202 15:09:07.761481 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d830c2b-5ac4-4489-abff-3fd320f807f7-kube-api-access-xckc2" (OuterVolumeSpecName: "kube-api-access-xckc2") pod "3d830c2b-5ac4-4489-abff-3fd320f807f7" (UID: "3d830c2b-5ac4-4489-abff-3fd320f807f7"). InnerVolumeSpecName "kube-api-access-xckc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:09:07 crc kubenswrapper[4900]: I1202 15:09:07.833772 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xckc2\" (UniqueName: \"kubernetes.io/projected/3d830c2b-5ac4-4489-abff-3fd320f807f7-kube-api-access-xckc2\") on node \"crc\" DevicePath \"\"" Dec 02 15:09:08 crc kubenswrapper[4900]: I1202 15:09:08.325537 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d51294229d662a263039b0638013614cfef93013f30536642fa6e8a51ebc377a" Dec 02 15:09:08 crc kubenswrapper[4900]: I1202 15:09:08.325590 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Dec 02 15:09:08 crc kubenswrapper[4900]: I1202 15:09:08.922791 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d830c2b-5ac4-4489-abff-3fd320f807f7" path="/var/lib/kubelet/pods/3d830c2b-5ac4-4489-abff-3fd320f807f7/volumes" Dec 02 15:09:12 crc kubenswrapper[4900]: I1202 15:09:12.911016 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:09:12 crc kubenswrapper[4900]: E1202 15:09:12.913485 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:09:26 crc kubenswrapper[4900]: I1202 15:09:26.909759 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:09:26 crc kubenswrapper[4900]: E1202 15:09:26.910439 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.652381 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 15:09:41 crc kubenswrapper[4900]: E1202 15:09:41.653370 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d830c2b-5ac4-4489-abff-3fd320f807f7" containerName="mariadb-client" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.653386 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d830c2b-5ac4-4489-abff-3fd320f807f7" containerName="mariadb-client" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.653567 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d830c2b-5ac4-4489-abff-3fd320f807f7" containerName="mariadb-client" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.654506 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.657838 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-rdz67" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.658269 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.658267 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.679325 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.688153 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.702618 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.705011 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.705242 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.708876 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.727259 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.806970 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4633b2f4-6b48-4454-87b9-cde7d972b4c4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807043 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48b6a8cf-d942-4104-96e8-357ecf994aa2-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807077 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22d5f722-205d-4bbc-867e-d730e746aaed-config\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807105 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4633b2f4-6b48-4454-87b9-cde7d972b4c4-config\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807148 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4d53ebe6-ba53-460c-97d6-b2b639769ea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d53ebe6-ba53-460c-97d6-b2b639769ea4\") pod \"ovsdbserver-nb-0\" (UID: 
\"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807294 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj5pt\" (UniqueName: \"kubernetes.io/projected/4633b2f4-6b48-4454-87b9-cde7d972b4c4-kube-api-access-sj5pt\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807406 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22d5f722-205d-4bbc-867e-d730e746aaed-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807439 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d5f722-205d-4bbc-867e-d730e746aaed-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807499 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzbkm\" (UniqueName: \"kubernetes.io/projected/22d5f722-205d-4bbc-867e-d730e746aaed-kube-api-access-bzbkm\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807542 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-169ccadc-9172-4368-b546-73aa861ac1f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-169ccadc-9172-4368-b546-73aa861ac1f4\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807579 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b6a8cf-d942-4104-96e8-357ecf994aa2-config\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807617 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msgz2\" (UniqueName: \"kubernetes.io/projected/48b6a8cf-d942-4104-96e8-357ecf994aa2-kube-api-access-msgz2\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807665 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4633b2f4-6b48-4454-87b9-cde7d972b4c4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807744 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6202f366-088e-42bf-ac2b-830c5ddc58f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6202f366-088e-42bf-ac2b-830c5ddc58f3\") pod 
\"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.807945 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b6a8cf-d942-4104-96e8-357ecf994aa2-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.808018 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4633b2f4-6b48-4454-87b9-cde7d972b4c4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.808054 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48b6a8cf-d942-4104-96e8-357ecf994aa2-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.808143 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22d5f722-205d-4bbc-867e-d730e746aaed-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.880250 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.887174 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.895596 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.897768 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.903216 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.904101 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.904509 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-52ngn" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.908781 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.910527 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911329 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4633b2f4-6b48-4454-87b9-cde7d972b4c4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911457 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48b6a8cf-d942-4104-96e8-357ecf994aa2-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911496 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22d5f722-205d-4bbc-867e-d730e746aaed-config\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911514 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4633b2f4-6b48-4454-87b9-cde7d972b4c4-config\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911554 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4d53ebe6-ba53-460c-97d6-b2b639769ea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d53ebe6-ba53-460c-97d6-b2b639769ea4\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911586 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj5pt\" (UniqueName: \"kubernetes.io/projected/4633b2f4-6b48-4454-87b9-cde7d972b4c4-kube-api-access-sj5pt\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911662 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22d5f722-205d-4bbc-867e-d730e746aaed-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911688 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d5f722-205d-4bbc-867e-d730e746aaed-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911711 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzbkm\" (UniqueName: \"kubernetes.io/projected/22d5f722-205d-4bbc-867e-d730e746aaed-kube-api-access-bzbkm\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911737 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-169ccadc-9172-4368-b546-73aa861ac1f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-169ccadc-9172-4368-b546-73aa861ac1f4\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911768 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b6a8cf-d942-4104-96e8-357ecf994aa2-config\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911789 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911808 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msgz2\" (UniqueName: \"kubernetes.io/projected/48b6a8cf-d942-4104-96e8-357ecf994aa2-kube-api-access-msgz2\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911843 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4633b2f4-6b48-4454-87b9-cde7d972b4c4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911873 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6202f366-088e-42bf-ac2b-830c5ddc58f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6202f366-088e-42bf-ac2b-830c5ddc58f3\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911904 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b6a8cf-d942-4104-96e8-357ecf994aa2-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911924 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4633b2f4-6b48-4454-87b9-cde7d972b4c4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911943 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48b6a8cf-d942-4104-96e8-357ecf994aa2-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.911966 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22d5f722-205d-4bbc-867e-d730e746aaed-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: E1202 15:09:41.912062 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.912103 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48b6a8cf-d942-4104-96e8-357ecf994aa2-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.912481 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4633b2f4-6b48-4454-87b9-cde7d972b4c4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.913113 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4633b2f4-6b48-4454-87b9-cde7d972b4c4-config\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.916551 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4633b2f4-6b48-4454-87b9-cde7d972b4c4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.917828 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b6a8cf-d942-4104-96e8-357ecf994aa2-config\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.918086 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22d5f722-205d-4bbc-867e-d730e746aaed-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.918630 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.918701 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4d53ebe6-ba53-460c-97d6-b2b639769ea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d53ebe6-ba53-460c-97d6-b2b639769ea4\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/198beb595cad54ad535d04d6d87aeca99dbb0bd1c8169c8cb5f1d470c4e9464d/globalmount\"" pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.919097 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.919128 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6202f366-088e-42bf-ac2b-830c5ddc58f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6202f366-088e-42bf-ac2b-830c5ddc58f3\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b8c567c80d9b4f37d24782dbab4c586262dfb568c8a3f818c1f182e59b1aff00/globalmount\"" pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.919225 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.919256 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-169ccadc-9172-4368-b546-73aa861ac1f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-169ccadc-9172-4368-b546-73aa861ac1f4\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fc67f3541718e95aee3a3995ec2a593c82e782404af705569409b493039a4343/globalmount\"" pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.921051 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22d5f722-205d-4bbc-867e-d730e746aaed-config\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.923719 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48b6a8cf-d942-4104-96e8-357ecf994aa2-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.923764 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.927868 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b6a8cf-d942-4104-96e8-357ecf994aa2-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.929424 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4633b2f4-6b48-4454-87b9-cde7d972b4c4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.930002 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22d5f722-205d-4bbc-867e-d730e746aaed-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.932488 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj5pt\" (UniqueName: \"kubernetes.io/projected/4633b2f4-6b48-4454-87b9-cde7d972b4c4-kube-api-access-sj5pt\") pod \"ovsdbserver-nb-0\" 
(UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.933409 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d5f722-205d-4bbc-867e-d730e746aaed-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.935245 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzbkm\" (UniqueName: \"kubernetes.io/projected/22d5f722-205d-4bbc-867e-d730e746aaed-kube-api-access-bzbkm\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.938505 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.941928 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msgz2\" (UniqueName: \"kubernetes.io/projected/48b6a8cf-d942-4104-96e8-357ecf994aa2-kube-api-access-msgz2\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.949929 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.979923 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6202f366-088e-42bf-ac2b-830c5ddc58f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6202f366-088e-42bf-ac2b-830c5ddc58f3\") pod \"ovsdbserver-nb-1\" (UID: \"22d5f722-205d-4bbc-867e-d730e746aaed\") " pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.980009 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-169ccadc-9172-4368-b546-73aa861ac1f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-169ccadc-9172-4368-b546-73aa861ac1f4\") pod \"ovsdbserver-nb-2\" (UID: \"48b6a8cf-d942-4104-96e8-357ecf994aa2\") " pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.976719 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4d53ebe6-ba53-460c-97d6-b2b639769ea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d53ebe6-ba53-460c-97d6-b2b639769ea4\") pod \"ovsdbserver-nb-0\" (UID: \"4633b2f4-6b48-4454-87b9-cde7d972b4c4\") " pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:41 crc kubenswrapper[4900]: I1202 15:09:41.984579 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.033490 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.044295 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.114048 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/377fd1c0-f510-425f-b335-d38287d30c30-config\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115110 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/377fd1c0-f510-425f-b335-d38287d30c30-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115139 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86e01a9-36e7-4b06-af54-fa63c1587435-config\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115168 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377fd1c0-f510-425f-b335-d38287d30c30-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115215 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dde891a-1ff3-45cb-b977-7e3103cc4795-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115234 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dde891a-1ff3-45cb-b977-7e3103cc4795-config\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115254 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86e01a9-36e7-4b06-af54-fa63c1587435-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115283 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q42zc\" (UniqueName: \"kubernetes.io/projected/6dde891a-1ff3-45cb-b977-7e3103cc4795-kube-api-access-q42zc\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115308 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d86e01a9-36e7-4b06-af54-fa63c1587435-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115326 4900 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm4dk\" (UniqueName: \"kubernetes.io/projected/377fd1c0-f510-425f-b335-d38287d30c30-kube-api-access-pm4dk\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115348 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d86e01a9-36e7-4b06-af54-fa63c1587435-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115370 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec89fc49-d938-447e-80be-35c752b418ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec89fc49-d938-447e-80be-35c752b418ab\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115389 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6dde891a-1ff3-45cb-b977-7e3103cc4795-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115409 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/377fd1c0-f510-425f-b335-d38287d30c30-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115426 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5bf1b473-c3cf-4555-89a3-befffa44c8fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bf1b473-c3cf-4555-89a3-befffa44c8fb\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115449 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dde891a-1ff3-45cb-b977-7e3103cc4795-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115464 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrhrg\" (UniqueName: \"kubernetes.io/projected/d86e01a9-36e7-4b06-af54-fa63c1587435-kube-api-access-vrhrg\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.115482 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-98f08d26-eee3-4fd0-bd76-dc1d9053bb02\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98f08d26-eee3-4fd0-bd76-dc1d9053bb02\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 
15:09:42.217436 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377fd1c0-f510-425f-b335-d38287d30c30-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217549 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dde891a-1ff3-45cb-b977-7e3103cc4795-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217579 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dde891a-1ff3-45cb-b977-7e3103cc4795-config\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217601 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86e01a9-36e7-4b06-af54-fa63c1587435-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217660 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q42zc\" (UniqueName: \"kubernetes.io/projected/6dde891a-1ff3-45cb-b977-7e3103cc4795-kube-api-access-q42zc\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217693 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d86e01a9-36e7-4b06-af54-fa63c1587435-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217714 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm4dk\" (UniqueName: \"kubernetes.io/projected/377fd1c0-f510-425f-b335-d38287d30c30-kube-api-access-pm4dk\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217741 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d86e01a9-36e7-4b06-af54-fa63c1587435-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217776 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ec89fc49-d938-447e-80be-35c752b418ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec89fc49-d938-447e-80be-35c752b418ab\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217804 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6dde891a-1ff3-45cb-b977-7e3103cc4795-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217831 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/377fd1c0-f510-425f-b335-d38287d30c30-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217857 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5bf1b473-c3cf-4555-89a3-befffa44c8fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bf1b473-c3cf-4555-89a3-befffa44c8fb\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217895 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrhrg\" (UniqueName: \"kubernetes.io/projected/d86e01a9-36e7-4b06-af54-fa63c1587435-kube-api-access-vrhrg\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217917 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dde891a-1ff3-45cb-b977-7e3103cc4795-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217950 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-98f08d26-eee3-4fd0-bd76-dc1d9053bb02\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98f08d26-eee3-4fd0-bd76-dc1d9053bb02\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.217977 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/377fd1c0-f510-425f-b335-d38287d30c30-config\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.218003 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/377fd1c0-f510-425f-b335-d38287d30c30-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.218032 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86e01a9-36e7-4b06-af54-fa63c1587435-config\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.219206 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86e01a9-36e7-4b06-af54-fa63c1587435-config\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.219865 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/6dde891a-1ff3-45cb-b977-7e3103cc4795-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.221402 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/377fd1c0-f510-425f-b335-d38287d30c30-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.221462 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dde891a-1ff3-45cb-b977-7e3103cc4795-config\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.221640 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d86e01a9-36e7-4b06-af54-fa63c1587435-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.221734 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.221965 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec89fc49-d938-447e-80be-35c752b418ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec89fc49-d938-447e-80be-35c752b418ab\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2eb52eb27219f989c125d52e75b5e745aafaf0fe480dec357b1080322173f895/globalmount\"" pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.221812 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/377fd1c0-f510-425f-b335-d38287d30c30-config\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.222101 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d86e01a9-36e7-4b06-af54-fa63c1587435-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.224749 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.224794 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-98f08d26-eee3-4fd0-bd76-dc1d9053bb02\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98f08d26-eee3-4fd0-bd76-dc1d9053bb02\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e5ec5e00bcd6226e02c20a325d54aeb501a36a376b8d60657e080c90eea100bc/globalmount\"" pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.225066 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6dde891a-1ff3-45cb-b977-7e3103cc4795-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.225227 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.225235 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377fd1c0-f510-425f-b335-d38287d30c30-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.225263 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5bf1b473-c3cf-4555-89a3-befffa44c8fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bf1b473-c3cf-4555-89a3-befffa44c8fb\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5bf79cfe759103e1148b75299264648481b4c120f5c4f4bfb0d01fd22dc90a10/globalmount\"" pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.226370 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/377fd1c0-f510-425f-b335-d38287d30c30-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.231499 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dde891a-1ff3-45cb-b977-7e3103cc4795-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.233027 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86e01a9-36e7-4b06-af54-fa63c1587435-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.235587 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q42zc\" (UniqueName: \"kubernetes.io/projected/6dde891a-1ff3-45cb-b977-7e3103cc4795-kube-api-access-q42zc\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 
15:09:42.237982 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrhrg\" (UniqueName: \"kubernetes.io/projected/d86e01a9-36e7-4b06-af54-fa63c1587435-kube-api-access-vrhrg\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.245517 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm4dk\" (UniqueName: \"kubernetes.io/projected/377fd1c0-f510-425f-b335-d38287d30c30-kube-api-access-pm4dk\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.261779 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5bf1b473-c3cf-4555-89a3-befffa44c8fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bf1b473-c3cf-4555-89a3-befffa44c8fb\") pod \"ovsdbserver-sb-0\" (UID: \"6dde891a-1ff3-45cb-b977-7e3103cc4795\") " pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.265265 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec89fc49-d938-447e-80be-35c752b418ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec89fc49-d938-447e-80be-35c752b418ab\") pod \"ovsdbserver-sb-1\" (UID: \"377fd1c0-f510-425f-b335-d38287d30c30\") " pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.267674 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-98f08d26-eee3-4fd0-bd76-dc1d9053bb02\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98f08d26-eee3-4fd0-bd76-dc1d9053bb02\") pod \"ovsdbserver-sb-2\" (UID: \"d86e01a9-36e7-4b06-af54-fa63c1587435\") " pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.397865 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.403852 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.498393 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.514106 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:42 crc kubenswrapper[4900]: W1202 15:09:42.539882 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4633b2f4_6b48_4454_87b9_cde7d972b4c4.slice/crio-d5280d722e5f81416d562ca319d051fd390726a02ac16534abe154e952be7fdb WatchSource:0}: Error finding container d5280d722e5f81416d562ca319d051fd390726a02ac16534abe154e952be7fdb: Status 404 returned error can't find the container with id d5280d722e5f81416d562ca319d051fd390726a02ac16534abe154e952be7fdb Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.617446 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Dec 02 15:09:42 crc kubenswrapper[4900]: W1202 15:09:42.625419 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48b6a8cf_d942_4104_96e8_357ecf994aa2.slice/crio-358e6d634d67c784ca2e5195761d9816b5dcfa46ff2d36121f5ba211a5527c86 WatchSource:0}: Error finding container 358e6d634d67c784ca2e5195761d9816b5dcfa46ff2d36121f5ba211a5527c86: Status 404 returned error can't find the container with id 358e6d634d67c784ca2e5195761d9816b5dcfa46ff2d36121f5ba211a5527c86 Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.631259 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4633b2f4-6b48-4454-87b9-cde7d972b4c4","Type":"ContainerStarted","Data":"d5280d722e5f81416d562ca319d051fd390726a02ac16534abe154e952be7fdb"} Dec 02 15:09:42 crc kubenswrapper[4900]: I1202 15:09:42.984753 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.069972 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.170243 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Dec 02 15:09:43 crc kubenswrapper[4900]: W1202 15:09:43.176239 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d5f722_205d_4bbc_867e_d730e746aaed.slice/crio-dd1d7bb9b4b4ca52de34267deafa0b0cfd4cb6c4f1917696a6506e5909bca1fe WatchSource:0}: Error finding container dd1d7bb9b4b4ca52de34267deafa0b0cfd4cb6c4f1917696a6506e5909bca1fe: Status 404 returned error can't find the container with id dd1d7bb9b4b4ca52de34267deafa0b0cfd4cb6c4f1917696a6506e5909bca1fe Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.603491 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 15:09:43 crc kubenswrapper[4900]: W1202 15:09:43.605177 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dde891a_1ff3_45cb_b977_7e3103cc4795.slice/crio-94e91865169be8589be9dfb1f97481a6b581ecf77241893d68aa56467dc1fc86 WatchSource:0}: Error finding container 94e91865169be8589be9dfb1f97481a6b581ecf77241893d68aa56467dc1fc86: Status 404 returned error can't find the container with id 94e91865169be8589be9dfb1f97481a6b581ecf77241893d68aa56467dc1fc86 Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.642807 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"4633b2f4-6b48-4454-87b9-cde7d972b4c4","Type":"ContainerStarted","Data":"f3613037f17f6e556a4048f44c293b24c28fcaec8db392de86ba3ca098860dce"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.642853 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4633b2f4-6b48-4454-87b9-cde7d972b4c4","Type":"ContainerStarted","Data":"13b9fa3cc9700d04fc2412387126837133c7e1bd7e038e50df34e8d1322ae128"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.645779 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"48b6a8cf-d942-4104-96e8-357ecf994aa2","Type":"ContainerStarted","Data":"1c9f69ae403410772057843df773689bfdc8d4173d45b7c6d1a6f31b8633c52d"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.645837 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"48b6a8cf-d942-4104-96e8-357ecf994aa2","Type":"ContainerStarted","Data":"eabde75044b858b31036fb7c3d912244f9b43dafa2655591737dcd3172346eef"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.645850 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"48b6a8cf-d942-4104-96e8-357ecf994aa2","Type":"ContainerStarted","Data":"358e6d634d67c784ca2e5195761d9816b5dcfa46ff2d36121f5ba211a5527c86"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.648946 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"22d5f722-205d-4bbc-867e-d730e746aaed","Type":"ContainerStarted","Data":"df6bbea8230fccf0e8997ff7cd2b3b41a584037d4a40d70a71a86caffc3430cf"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.649021 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"22d5f722-205d-4bbc-867e-d730e746aaed","Type":"ContainerStarted","Data":"2d91bb3e9ec47419f151cb5438504a0a1f870f787cbfab82b2972a77c3b9cef0"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.649033 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"22d5f722-205d-4bbc-867e-d730e746aaed","Type":"ContainerStarted","Data":"dd1d7bb9b4b4ca52de34267deafa0b0cfd4cb6c4f1917696a6506e5909bca1fe"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.650886 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d86e01a9-36e7-4b06-af54-fa63c1587435","Type":"ContainerStarted","Data":"0dda7ef2487c25697df18aa00e63243038c5e30c4420c6ed164d70700169b47d"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.650931 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d86e01a9-36e7-4b06-af54-fa63c1587435","Type":"ContainerStarted","Data":"3d8324d6d546ad3a830ccdf7c85d284051227b18f973b6e98e9ea75a79a5fcc9"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.650942 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d86e01a9-36e7-4b06-af54-fa63c1587435","Type":"ContainerStarted","Data":"d560ae0dcd875db41f7dc64061f782dc1c0771b0d84f0d75b02feebd95d9c24a"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.652758 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6dde891a-1ff3-45cb-b977-7e3103cc4795","Type":"ContainerStarted","Data":"94e91865169be8589be9dfb1f97481a6b581ecf77241893d68aa56467dc1fc86"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.654317 4900 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"377fd1c0-f510-425f-b335-d38287d30c30","Type":"ContainerStarted","Data":"0b71863a790c74e261bdd9118cb09b1e2dfdb6a98b7ed7e390c670291cc95dfe"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.654339 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"377fd1c0-f510-425f-b335-d38287d30c30","Type":"ContainerStarted","Data":"0d9811f7d1be062756be7017ab63e418bbb48c2129c5f4f19e1191758ce2216f"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.654348 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"377fd1c0-f510-425f-b335-d38287d30c30","Type":"ContainerStarted","Data":"58dde965785cc8cc8a17dfb3a2944cb34b40be092c59d74e294628f7a25ef3ae"} Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.669820 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.669801928 podStartE2EDuration="3.669801928s" podCreationTimestamp="2025-12-02 15:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:09:43.661233477 +0000 UTC m=+5229.077047338" watchObservedRunningTime="2025-12-02 15:09:43.669801928 +0000 UTC m=+5229.085615769" Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.680410 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.680392515 podStartE2EDuration="3.680392515s" podCreationTimestamp="2025-12-02 15:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:09:43.680352124 +0000 UTC m=+5229.096165975" watchObservedRunningTime="2025-12-02 15:09:43.680392515 +0000 UTC m=+5229.096206366" Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.702189 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.702169176 podStartE2EDuration="3.702169176s" podCreationTimestamp="2025-12-02 15:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:09:43.699420139 +0000 UTC m=+5229.115234040" watchObservedRunningTime="2025-12-02 15:09:43.702169176 +0000 UTC m=+5229.117983027" Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.722756 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.722734833 podStartE2EDuration="3.722734833s" podCreationTimestamp="2025-12-02 15:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:09:43.714469511 +0000 UTC m=+5229.130283382" watchObservedRunningTime="2025-12-02 15:09:43.722734833 +0000 UTC m=+5229.138548684" Dec 02 15:09:43 crc kubenswrapper[4900]: I1202 15:09:43.739911 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.7398860640000002 podStartE2EDuration="3.739886064s" podCreationTimestamp="2025-12-02 15:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:09:43.735023518 +0000 UTC m=+5229.150837369" 
watchObservedRunningTime="2025-12-02 15:09:43.739886064 +0000 UTC m=+5229.155699915" Dec 02 15:09:44 crc kubenswrapper[4900]: I1202 15:09:44.669557 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6dde891a-1ff3-45cb-b977-7e3103cc4795","Type":"ContainerStarted","Data":"96873b0b16fc4e8526a49de958e7cbb14a99de6872bd7e69a4b191bfb0461fca"} Dec 02 15:09:44 crc kubenswrapper[4900]: I1202 15:09:44.669998 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6dde891a-1ff3-45cb-b977-7e3103cc4795","Type":"ContainerStarted","Data":"68000014f3bd97db59be6dd0d380663d93449fdbe7bbe2533ac6e54de4111657"} Dec 02 15:09:44 crc kubenswrapper[4900]: I1202 15:09:44.704013 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.703974811 podStartE2EDuration="4.703974811s" podCreationTimestamp="2025-12-02 15:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:09:44.688080675 +0000 UTC m=+5230.103894526" watchObservedRunningTime="2025-12-02 15:09:44.703974811 +0000 UTC m=+5230.119788702" Dec 02 15:09:44 crc kubenswrapper[4900]: I1202 15:09:44.985919 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:45 crc kubenswrapper[4900]: I1202 15:09:45.034673 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:45 crc kubenswrapper[4900]: I1202 15:09:45.045736 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:45 crc kubenswrapper[4900]: I1202 15:09:45.107096 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:45 crc kubenswrapper[4900]: I1202 15:09:45.398834 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:45 crc kubenswrapper[4900]: I1202 15:09:45.404739 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:45 crc kubenswrapper[4900]: I1202 15:09:45.515056 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:45 crc kubenswrapper[4900]: I1202 15:09:45.681306 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:46 crc kubenswrapper[4900]: I1202 15:09:46.985897 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.045461 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.101837 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.399889 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.405492 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.433222 4900 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-8578f89889-bth4v"] Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.434602 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8578f89889-bth4v" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.436926 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.450828 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8578f89889-bth4v"] Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.515579 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.555917 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnq9v\" (UniqueName: \"kubernetes.io/projected/709cb224-f4db-4abc-a2de-58b00284f74e-kube-api-access-jnq9v\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.556086 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-config\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.556125 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-ovsdbserver-nb\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.556209 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-dns-svc\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.657976 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnq9v\" (UniqueName: \"kubernetes.io/projected/709cb224-f4db-4abc-a2de-58b00284f74e-kube-api-access-jnq9v\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.658422 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-config\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v" Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.658463 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-ovsdbserver-nb\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v" Dec 02 15:09:47 crc 
kubenswrapper[4900]: I1202 15:09:47.658520 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-dns-svc\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v"
Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.659325 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-dns-svc\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v"
Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.659620 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-config\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v"
Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.659864 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-ovsdbserver-nb\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v"
Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.688849 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnq9v\" (UniqueName: \"kubernetes.io/projected/709cb224-f4db-4abc-a2de-58b00284f74e-kube-api-access-jnq9v\") pod \"dnsmasq-dns-8578f89889-bth4v\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") " pod="openstack/dnsmasq-dns-8578f89889-bth4v"
Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.769910 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8578f89889-bth4v"
Dec 02 15:09:47 crc kubenswrapper[4900]: I1202 15:09:47.997292 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8578f89889-bth4v"]
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.047332 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.095599 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.118836 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.164300 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.443920 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.460707 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.503854 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.535977 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.618797 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.723297 4900 generic.go:334] "Generic (PLEG): container finished" podID="709cb224-f4db-4abc-a2de-58b00284f74e" containerID="554c0a6bec1e041442d2f023ab2b551dce487a4f7c0c511830764256fd15bba8" exitCode=0
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.725431 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8578f89889-bth4v" event={"ID":"709cb224-f4db-4abc-a2de-58b00284f74e","Type":"ContainerDied","Data":"554c0a6bec1e041442d2f023ab2b551dce487a4f7c0c511830764256fd15bba8"}
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.726879 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8578f89889-bth4v"]
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.726972 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8578f89889-bth4v" event={"ID":"709cb224-f4db-4abc-a2de-58b00284f74e","Type":"ContainerStarted","Data":"dd5b73ce4b431ea7eac30eb7a4acdb3d89f010f99b3db7b199a39a93fc157155"}
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.760479 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ddc6654df-dr622"]
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.763603 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.769970 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.778373 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ddc6654df-dr622"]
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.802750 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.982017 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-sb\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.982239 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-dns-svc\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.983065 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsmpp\" (UniqueName: \"kubernetes.io/projected/36b9b506-2cd8-4957-ae76-05e28ed9b74c-kube-api-access-nsmpp\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.983129 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-config\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:48 crc kubenswrapper[4900]: I1202 15:09:48.983158 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-nb\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.085049 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsmpp\" (UniqueName: \"kubernetes.io/projected/36b9b506-2cd8-4957-ae76-05e28ed9b74c-kube-api-access-nsmpp\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.085145 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-config\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.085181 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-nb\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.085252 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-sb\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.085335 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-dns-svc\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.086288 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-nb\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.086473 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-dns-svc\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.087001 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-sb\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.087041 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-config\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.107246 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsmpp\" (UniqueName: \"kubernetes.io/projected/36b9b506-2cd8-4957-ae76-05e28ed9b74c-kube-api-access-nsmpp\") pod \"dnsmasq-dns-6ddc6654df-dr622\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.140194 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.639881 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ddc6654df-dr622"]
Dec 02 15:09:49 crc kubenswrapper[4900]: W1202 15:09:49.646989 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36b9b506_2cd8_4957_ae76_05e28ed9b74c.slice/crio-81eda33fdba050df1bcf77ed5a66d5881860cfd0c1bf3b7f21b158109f0da54c WatchSource:0}: Error finding container 81eda33fdba050df1bcf77ed5a66d5881860cfd0c1bf3b7f21b158109f0da54c: Status 404 returned error can't find the container with id 81eda33fdba050df1bcf77ed5a66d5881860cfd0c1bf3b7f21b158109f0da54c
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.736916 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ddc6654df-dr622" event={"ID":"36b9b506-2cd8-4957-ae76-05e28ed9b74c","Type":"ContainerStarted","Data":"81eda33fdba050df1bcf77ed5a66d5881860cfd0c1bf3b7f21b158109f0da54c"}
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.740325 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8578f89889-bth4v" podUID="709cb224-f4db-4abc-a2de-58b00284f74e" containerName="dnsmasq-dns" containerID="cri-o://8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e" gracePeriod=10
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.740616 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8578f89889-bth4v" event={"ID":"709cb224-f4db-4abc-a2de-58b00284f74e","Type":"ContainerStarted","Data":"8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e"}
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.740676 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8578f89889-bth4v"
Dec 02 15:09:49 crc kubenswrapper[4900]: I1202 15:09:49.773441 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8578f89889-bth4v" podStartSLOduration=2.7734118309999998 podStartE2EDuration="2.773411831s" podCreationTimestamp="2025-12-02 15:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:09:49.755495289 +0000 UTC m=+5235.171309140" watchObservedRunningTime="2025-12-02 15:09:49.773411831 +0000 UTC m=+5235.189225712"
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.186152 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8578f89889-bth4v"
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.322264 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-dns-svc\") pod \"709cb224-f4db-4abc-a2de-58b00284f74e\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") "
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.322527 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnq9v\" (UniqueName: \"kubernetes.io/projected/709cb224-f4db-4abc-a2de-58b00284f74e-kube-api-access-jnq9v\") pod \"709cb224-f4db-4abc-a2de-58b00284f74e\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") "
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.322601 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-config\") pod \"709cb224-f4db-4abc-a2de-58b00284f74e\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") "
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.322707 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-ovsdbserver-nb\") pod \"709cb224-f4db-4abc-a2de-58b00284f74e\" (UID: \"709cb224-f4db-4abc-a2de-58b00284f74e\") "
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.328857 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709cb224-f4db-4abc-a2de-58b00284f74e-kube-api-access-jnq9v" (OuterVolumeSpecName: "kube-api-access-jnq9v") pod "709cb224-f4db-4abc-a2de-58b00284f74e" (UID: "709cb224-f4db-4abc-a2de-58b00284f74e"). InnerVolumeSpecName "kube-api-access-jnq9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.368181 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "709cb224-f4db-4abc-a2de-58b00284f74e" (UID: "709cb224-f4db-4abc-a2de-58b00284f74e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.370072 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-config" (OuterVolumeSpecName: "config") pod "709cb224-f4db-4abc-a2de-58b00284f74e" (UID: "709cb224-f4db-4abc-a2de-58b00284f74e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.392039 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "709cb224-f4db-4abc-a2de-58b00284f74e" (UID: "709cb224-f4db-4abc-a2de-58b00284f74e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.425373 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.425411 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnq9v\" (UniqueName: \"kubernetes.io/projected/709cb224-f4db-4abc-a2de-58b00284f74e-kube-api-access-jnq9v\") on node \"crc\" DevicePath \"\""
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.425427 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-config\") on node \"crc\" DevicePath \"\""
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.425439 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709cb224-f4db-4abc-a2de-58b00284f74e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.751727 4900 generic.go:334] "Generic (PLEG): container finished" podID="709cb224-f4db-4abc-a2de-58b00284f74e" containerID="8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e" exitCode=0
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.751834 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8578f89889-bth4v"
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.751823 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8578f89889-bth4v" event={"ID":"709cb224-f4db-4abc-a2de-58b00284f74e","Type":"ContainerDied","Data":"8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e"}
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.752520 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8578f89889-bth4v" event={"ID":"709cb224-f4db-4abc-a2de-58b00284f74e","Type":"ContainerDied","Data":"dd5b73ce4b431ea7eac30eb7a4acdb3d89f010f99b3db7b199a39a93fc157155"}
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.752566 4900 scope.go:117] "RemoveContainer" containerID="8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e"
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.754793 4900 generic.go:334] "Generic (PLEG): container finished" podID="36b9b506-2cd8-4957-ae76-05e28ed9b74c" containerID="053f5789e5fc4634db6e8c19bed0cfcfea6a6b13997c70b282ddaa9f482be02c" exitCode=0
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.754846 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ddc6654df-dr622" event={"ID":"36b9b506-2cd8-4957-ae76-05e28ed9b74c","Type":"ContainerDied","Data":"053f5789e5fc4634db6e8c19bed0cfcfea6a6b13997c70b282ddaa9f482be02c"}
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.869304 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8578f89889-bth4v"]
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.885292 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8578f89889-bth4v"]
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.890279 4900 scope.go:117] "RemoveContainer" containerID="554c0a6bec1e041442d2f023ab2b551dce487a4f7c0c511830764256fd15bba8"
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.922384 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709cb224-f4db-4abc-a2de-58b00284f74e" path="/var/lib/kubelet/pods/709cb224-f4db-4abc-a2de-58b00284f74e/volumes"
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.928383 4900 scope.go:117] "RemoveContainer" containerID="8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e"
Dec 02 15:09:50 crc kubenswrapper[4900]: E1202 15:09:50.929071 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e\": container with ID starting with 8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e not found: ID does not exist" containerID="8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e"
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.929124 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e"} err="failed to get container status \"8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e\": rpc error: code = NotFound desc = could not find container \"8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e\": container with ID starting with 8b06a00fc583cf01fca587275c24f23e4c0f32780c25307ed073ffd84847b67e not found: ID does not exist"
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.929168 4900 scope.go:117] "RemoveContainer" containerID="554c0a6bec1e041442d2f023ab2b551dce487a4f7c0c511830764256fd15bba8"
Dec 02 15:09:50 crc kubenswrapper[4900]: E1202 15:09:50.929717 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"554c0a6bec1e041442d2f023ab2b551dce487a4f7c0c511830764256fd15bba8\": container with ID starting with 554c0a6bec1e041442d2f023ab2b551dce487a4f7c0c511830764256fd15bba8 not found: ID does not exist" containerID="554c0a6bec1e041442d2f023ab2b551dce487a4f7c0c511830764256fd15bba8"
Dec 02 15:09:50 crc kubenswrapper[4900]: I1202 15:09:50.929826 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"554c0a6bec1e041442d2f023ab2b551dce487a4f7c0c511830764256fd15bba8"} err="failed to get container status \"554c0a6bec1e041442d2f023ab2b551dce487a4f7c0c511830764256fd15bba8\": rpc error: code = NotFound desc = could not find container \"554c0a6bec1e041442d2f023ab2b551dce487a4f7c0c511830764256fd15bba8\": container with ID starting with 554c0a6bec1e041442d2f023ab2b551dce487a4f7c0c511830764256fd15bba8 not found: ID does not exist"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.349626 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"]
Dec 02 15:09:51 crc kubenswrapper[4900]: E1202 15:09:51.350347 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709cb224-f4db-4abc-a2de-58b00284f74e" containerName="init"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.350377 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="709cb224-f4db-4abc-a2de-58b00284f74e" containerName="init"
Dec 02 15:09:51 crc kubenswrapper[4900]: E1202 15:09:51.350406 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709cb224-f4db-4abc-a2de-58b00284f74e" containerName="dnsmasq-dns"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.350422 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="709cb224-f4db-4abc-a2de-58b00284f74e" containerName="dnsmasq-dns"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.350834 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="709cb224-f4db-4abc-a2de-58b00284f74e" containerName="dnsmasq-dns"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.352082 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.356344 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.357933 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.445040 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jltxd\" (UniqueName: \"kubernetes.io/projected/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-kube-api-access-jltxd\") pod \"ovn-copy-data\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.445600 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.445879 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-354967a5-67ca-400e-8d2a-8cae430124e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-354967a5-67ca-400e-8d2a-8cae430124e8\") pod \"ovn-copy-data\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.547865 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-354967a5-67ca-400e-8d2a-8cae430124e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-354967a5-67ca-400e-8d2a-8cae430124e8\") pod \"ovn-copy-data\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.548031 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jltxd\" (UniqueName: \"kubernetes.io/projected/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-kube-api-access-jltxd\") pod \"ovn-copy-data\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.548112 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.553138 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.553184 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-354967a5-67ca-400e-8d2a-8cae430124e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-354967a5-67ca-400e-8d2a-8cae430124e8\") pod \"ovn-copy-data\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/33fb5c424af78a50015330fceec7e5a949aaae12ac6410e4016ffd38f877e96e/globalmount\"" pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.557105 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.575450 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jltxd\" (UniqueName: \"kubernetes.io/projected/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-kube-api-access-jltxd\") pod \"ovn-copy-data\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.585902 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-354967a5-67ca-400e-8d2a-8cae430124e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-354967a5-67ca-400e-8d2a-8cae430124e8\") pod \"ovn-copy-data\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.689920 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.766111 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ddc6654df-dr622" event={"ID":"36b9b506-2cd8-4957-ae76-05e28ed9b74c","Type":"ContainerStarted","Data":"4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12"}
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.766212 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:51 crc kubenswrapper[4900]: I1202 15:09:51.793991 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ddc6654df-dr622" podStartSLOduration=3.793969445 podStartE2EDuration="3.793969445s" podCreationTimestamp="2025-12-02 15:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:09:51.786483005 +0000 UTC m=+5237.202296856" watchObservedRunningTime="2025-12-02 15:09:51.793969445 +0000 UTC m=+5237.209783286"
Dec 02 15:09:52 crc kubenswrapper[4900]: I1202 15:09:52.038013 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Dec 02 15:09:52 crc kubenswrapper[4900]: W1202 15:09:52.049971 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52a18c77_17d0_4f5d_b98d_b5c9947d0ed8.slice/crio-57cafdfd771f75697c82eb71ea72d1c5cfbe10da313477909be89533bd1ab685 WatchSource:0}: Error finding container 57cafdfd771f75697c82eb71ea72d1c5cfbe10da313477909be89533bd1ab685: Status 404 returned error can't find the container with id 57cafdfd771f75697c82eb71ea72d1c5cfbe10da313477909be89533bd1ab685
Dec 02 15:09:52 crc kubenswrapper[4900]: I1202 15:09:52.779182 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8","Type":"ContainerStarted","Data":"81f30ad27085d16c67c11e1814f8c7d6a68da7aff2b6ad120cf108a8b4157342"}
Dec 02 15:09:52 crc kubenswrapper[4900]: I1202 15:09:52.779882 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8","Type":"ContainerStarted","Data":"57cafdfd771f75697c82eb71ea72d1c5cfbe10da313477909be89533bd1ab685"}
Dec 02 15:09:52 crc kubenswrapper[4900]: I1202 15:09:52.803089 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.803067375 podStartE2EDuration="2.803067375s" podCreationTimestamp="2025-12-02 15:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:09:52.796978274 +0000 UTC m=+5238.212792205" watchObservedRunningTime="2025-12-02 15:09:52.803067375 +0000 UTC m=+5238.218881246"
Dec 02 15:09:55 crc kubenswrapper[4900]: I1202 15:09:55.910144 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7"
Dec 02 15:09:55 crc kubenswrapper[4900]: E1202 15:09:55.910832 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.366700 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.369468 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.373866 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.373975 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.375692 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-j8s4d"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.384557 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.482799 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.482864 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-scripts\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.482904 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.483204 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-config\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.483281 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zctgh\" (UniqueName: \"kubernetes.io/projected/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-kube-api-access-zctgh\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.585251 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.585324 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-scripts\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.585363 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.585442 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-config\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.585468 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zctgh\" (UniqueName: \"kubernetes.io/projected/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-kube-api-access-zctgh\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.586013 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.586571 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-scripts\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.586845 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-config\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.592564 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.612351 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zctgh\" (UniqueName: \"kubernetes.io/projected/0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09-kube-api-access-zctgh\") pod \"ovn-northd-0\" (UID: \"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09\") " pod="openstack/ovn-northd-0"
Dec 02 15:09:58 crc kubenswrapper[4900]: I1202 15:09:58.699244 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.144193 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ddc6654df-dr622"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.173947 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 02 15:09:59 crc kubenswrapper[4900]: W1202 15:09:59.181072 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0789e4f3_cc8f_4a96_b7a7_8adb1e4ffd09.slice/crio-4a647654b6622be9fa248ee5a12cea239f1ca7ebb97abd910411f43319cf6add WatchSource:0}: Error finding container 4a647654b6622be9fa248ee5a12cea239f1ca7ebb97abd910411f43319cf6add: Status 404 returned error can't find the container with id 4a647654b6622be9fa248ee5a12cea239f1ca7ebb97abd910411f43319cf6add
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.231872 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-whglv"]
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.232159 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" podUID="121dcfbe-214b-4bae-86d1-10d236d28c4a" containerName="dnsmasq-dns" containerID="cri-o://dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3" gracePeriod=10
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.396299 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" podUID="121dcfbe-214b-4bae-86d1-10d236d28c4a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.243:5353: connect: connection refused"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.701094 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.812559 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcqzt\" (UniqueName: \"kubernetes.io/projected/121dcfbe-214b-4bae-86d1-10d236d28c4a-kube-api-access-xcqzt\") pod \"121dcfbe-214b-4bae-86d1-10d236d28c4a\" (UID: \"121dcfbe-214b-4bae-86d1-10d236d28c4a\") "
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.812636 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-config\") pod \"121dcfbe-214b-4bae-86d1-10d236d28c4a\" (UID: \"121dcfbe-214b-4bae-86d1-10d236d28c4a\") "
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.812747 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-dns-svc\") pod \"121dcfbe-214b-4bae-86d1-10d236d28c4a\" (UID: \"121dcfbe-214b-4bae-86d1-10d236d28c4a\") "
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.819149 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/121dcfbe-214b-4bae-86d1-10d236d28c4a-kube-api-access-xcqzt" (OuterVolumeSpecName: "kube-api-access-xcqzt") pod "121dcfbe-214b-4bae-86d1-10d236d28c4a" (UID: "121dcfbe-214b-4bae-86d1-10d236d28c4a"). InnerVolumeSpecName "kube-api-access-xcqzt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.853958 4900 generic.go:334] "Generic (PLEG): container finished" podID="121dcfbe-214b-4bae-86d1-10d236d28c4a" containerID="dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3" exitCode=0
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.854072 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.854748 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" event={"ID":"121dcfbe-214b-4bae-86d1-10d236d28c4a","Type":"ContainerDied","Data":"dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3"}
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.854784 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-whglv" event={"ID":"121dcfbe-214b-4bae-86d1-10d236d28c4a","Type":"ContainerDied","Data":"8e4d07c7013b6c1a6f142bf233fb2cc884ea12fd5ee1c69dc41be77b55cd481d"}
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.854802 4900 scope.go:117] "RemoveContainer" containerID="dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.858142 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09","Type":"ContainerStarted","Data":"b36af588343220fc9f3abc923c9090feffa753b894dcbf9998c2a5eacac71981"}
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.858196 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09","Type":"ContainerStarted","Data":"5b2698739e1411158bec022e6ba8d176383505fafb8d6187abc82a9e06a8fd77"}
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.858220 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09","Type":"ContainerStarted","Data":"4a647654b6622be9fa248ee5a12cea239f1ca7ebb97abd910411f43319cf6add"}
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.861371 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.866633 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-config" (OuterVolumeSpecName: "config") pod "121dcfbe-214b-4bae-86d1-10d236d28c4a" (UID: "121dcfbe-214b-4bae-86d1-10d236d28c4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.870174 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "121dcfbe-214b-4bae-86d1-10d236d28c4a" (UID: "121dcfbe-214b-4bae-86d1-10d236d28c4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.885375 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.885348062 podStartE2EDuration="1.885348062s" podCreationTimestamp="2025-12-02 15:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:09:59.884996532 +0000 UTC m=+5245.300810413" watchObservedRunningTime="2025-12-02 15:09:59.885348062 +0000 UTC m=+5245.301161933"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.905943 4900 scope.go:117] "RemoveContainer" containerID="f4734f1ac03c4764aab974cd521023a7338aa189459bf6c20ec607a47d95ebbf"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.914260 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.914294 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcqzt\" (UniqueName: \"kubernetes.io/projected/121dcfbe-214b-4bae-86d1-10d236d28c4a-kube-api-access-xcqzt\") on node \"crc\" DevicePath \"\""
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.914307 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121dcfbe-214b-4bae-86d1-10d236d28c4a-config\") on node \"crc\" DevicePath \"\""
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.929353 4900 scope.go:117] "RemoveContainer" containerID="dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3"
Dec 02 15:09:59 crc kubenswrapper[4900]: E1202 15:09:59.932614 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3\": container with ID starting with dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3 not found: ID does not exist" containerID="dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.932684 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3"} err="failed to get container status \"dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3\": rpc error: code = NotFound desc = could not find container \"dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3\": container with ID starting with dcbae212a1701fb89632dce3be3bbf2f116c752b08db2acb66febadede0039b3 not found: ID does not exist"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.932713 4900 scope.go:117] "RemoveContainer" containerID="f4734f1ac03c4764aab974cd521023a7338aa189459bf6c20ec607a47d95ebbf"
Dec 02 15:09:59 crc kubenswrapper[4900]: E1202 15:09:59.933169 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4734f1ac03c4764aab974cd521023a7338aa189459bf6c20ec607a47d95ebbf\": container with ID starting with f4734f1ac03c4764aab974cd521023a7338aa189459bf6c20ec607a47d95ebbf not found: ID does not exist" containerID="f4734f1ac03c4764aab974cd521023a7338aa189459bf6c20ec607a47d95ebbf"
Dec 02 15:09:59 crc kubenswrapper[4900]: I1202 15:09:59.933234 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4734f1ac03c4764aab974cd521023a7338aa189459bf6c20ec607a47d95ebbf"} err="failed to get container status \"f4734f1ac03c4764aab974cd521023a7338aa189459bf6c20ec607a47d95ebbf\": rpc error: code = NotFound desc = could not find container \"f4734f1ac03c4764aab974cd521023a7338aa189459bf6c20ec607a47d95ebbf\": container with ID starting with f4734f1ac03c4764aab974cd521023a7338aa189459bf6c20ec607a47d95ebbf not found: ID does not exist"
Dec 02 15:10:00 crc kubenswrapper[4900]: I1202 15:10:00.186710 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-whglv"]
Dec 02 15:10:00 crc kubenswrapper[4900]: I1202 15:10:00.192689 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-whglv"]
Dec 02 15:10:00 crc kubenswrapper[4900]: I1202 15:10:00.687797 4900 scope.go:117] "RemoveContainer" containerID="3a08432bdff7117e29874a0e922811c6b8d6431232df919945632f0348872cb7"
Dec 02 15:10:00 crc kubenswrapper[4900]: I1202 15:10:00.929729 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="121dcfbe-214b-4bae-86d1-10d236d28c4a" path="/var/lib/kubelet/pods/121dcfbe-214b-4bae-86d1-10d236d28c4a/volumes"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.696032 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rj9xn"]
Dec 02 15:10:03 crc kubenswrapper[4900]: E1202 15:10:03.697017 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="121dcfbe-214b-4bae-86d1-10d236d28c4a" containerName="dnsmasq-dns"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.697037 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="121dcfbe-214b-4bae-86d1-10d236d28c4a" containerName="dnsmasq-dns"
Dec 02 15:10:03 crc kubenswrapper[4900]: E1202 15:10:03.697052 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="121dcfbe-214b-4bae-86d1-10d236d28c4a" containerName="init"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.697059 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="121dcfbe-214b-4bae-86d1-10d236d28c4a" containerName="init"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.697272 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="121dcfbe-214b-4bae-86d1-10d236d28c4a" containerName="dnsmasq-dns"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.697894 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rj9xn"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.709465 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-13d1-account-create-update-wj6rs"]
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.710419 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-13d1-account-create-update-wj6rs"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.713510 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.724792 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rj9xn"]
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.729058 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-13d1-account-create-update-wj6rs"]
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.784927 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c7fd27-2e6f-4c2c-a72a-06172797e12f-operator-scripts\") pod \"keystone-db-create-rj9xn\" (UID: \"41c7fd27-2e6f-4c2c-a72a-06172797e12f\") " pod="openstack/keystone-db-create-rj9xn"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.785001 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w22jf\" (UniqueName: \"kubernetes.io/projected/41c7fd27-2e6f-4c2c-a72a-06172797e12f-kube-api-access-w22jf\") pod \"keystone-db-create-rj9xn\" (UID: \"41c7fd27-2e6f-4c2c-a72a-06172797e12f\") " pod="openstack/keystone-db-create-rj9xn"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.785042 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ace928ef-1e0e-4667-b1e0-0050528071f2-operator-scripts\") pod \"keystone-13d1-account-create-update-wj6rs\" (UID: \"ace928ef-1e0e-4667-b1e0-0050528071f2\") " pod="openstack/keystone-13d1-account-create-update-wj6rs"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.785142 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czwx4\" (UniqueName: \"kubernetes.io/projected/ace928ef-1e0e-4667-b1e0-0050528071f2-kube-api-access-czwx4\") pod \"keystone-13d1-account-create-update-wj6rs\" (UID: \"ace928ef-1e0e-4667-b1e0-0050528071f2\") " pod="openstack/keystone-13d1-account-create-update-wj6rs"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.886768 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c7fd27-2e6f-4c2c-a72a-06172797e12f-operator-scripts\") pod \"keystone-db-create-rj9xn\" (UID: \"41c7fd27-2e6f-4c2c-a72a-06172797e12f\") " pod="openstack/keystone-db-create-rj9xn"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.886825 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w22jf\" (UniqueName: \"kubernetes.io/projected/41c7fd27-2e6f-4c2c-a72a-06172797e12f-kube-api-access-w22jf\") pod \"keystone-db-create-rj9xn\" (UID: \"41c7fd27-2e6f-4c2c-a72a-06172797e12f\") " pod="openstack/keystone-db-create-rj9xn"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.886869 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ace928ef-1e0e-4667-b1e0-0050528071f2-operator-scripts\") pod \"keystone-13d1-account-create-update-wj6rs\" (UID: \"ace928ef-1e0e-4667-b1e0-0050528071f2\") " pod="openstack/keystone-13d1-account-create-update-wj6rs"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.886902 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czwx4\" (UniqueName: \"kubernetes.io/projected/ace928ef-1e0e-4667-b1e0-0050528071f2-kube-api-access-czwx4\") pod \"keystone-13d1-account-create-update-wj6rs\" (UID: \"ace928ef-1e0e-4667-b1e0-0050528071f2\") " pod="openstack/keystone-13d1-account-create-update-wj6rs"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.887682 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c7fd27-2e6f-4c2c-a72a-06172797e12f-operator-scripts\") pod \"keystone-db-create-rj9xn\" (UID: \"41c7fd27-2e6f-4c2c-a72a-06172797e12f\") " pod="openstack/keystone-db-create-rj9xn"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.888517 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ace928ef-1e0e-4667-b1e0-0050528071f2-operator-scripts\") pod \"keystone-13d1-account-create-update-wj6rs\" (UID: \"ace928ef-1e0e-4667-b1e0-0050528071f2\") " pod="openstack/keystone-13d1-account-create-update-wj6rs"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.908530 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w22jf\" (UniqueName: \"kubernetes.io/projected/41c7fd27-2e6f-4c2c-a72a-06172797e12f-kube-api-access-w22jf\") pod \"keystone-db-create-rj9xn\" (UID: \"41c7fd27-2e6f-4c2c-a72a-06172797e12f\") " pod="openstack/keystone-db-create-rj9xn"
Dec 02 15:10:03 crc kubenswrapper[4900]: I1202 15:10:03.908942 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czwx4\" (UniqueName: \"kubernetes.io/projected/ace928ef-1e0e-4667-b1e0-0050528071f2-kube-api-access-czwx4\") pod \"keystone-13d1-account-create-update-wj6rs\" (UID: \"ace928ef-1e0e-4667-b1e0-0050528071f2\") " pod="openstack/keystone-13d1-account-create-update-wj6rs"
Dec 02 15:10:04 crc kubenswrapper[4900]: I1202 15:10:04.033454 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rj9xn"
Dec 02 15:10:04 crc kubenswrapper[4900]: I1202 15:10:04.048008 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-13d1-account-create-update-wj6rs"
Dec 02 15:10:04 crc kubenswrapper[4900]: I1202 15:10:04.517585 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rj9xn"]
Dec 02 15:10:04 crc kubenswrapper[4900]: I1202 15:10:04.581995 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-13d1-account-create-update-wj6rs"]
Dec 02 15:10:04 crc kubenswrapper[4900]: W1202 15:10:04.586901 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podace928ef_1e0e_4667_b1e0_0050528071f2.slice/crio-c09ec0993eb35f6a333d834f97c0cfd836cfcb87d6838f644ed1c2d50aed8bb5 WatchSource:0}: Error finding container c09ec0993eb35f6a333d834f97c0cfd836cfcb87d6838f644ed1c2d50aed8bb5: Status 404 returned error can't find the container with id c09ec0993eb35f6a333d834f97c0cfd836cfcb87d6838f644ed1c2d50aed8bb5
Dec 02 15:10:04 crc kubenswrapper[4900]: I1202 15:10:04.920560 4900 generic.go:334] "Generic (PLEG): container finished" podID="41c7fd27-2e6f-4c2c-a72a-06172797e12f" containerID="848fb28230e049f9c653cb59ccc032c4836d04ec6a23f9081408df5909b61936" exitCode=0
Dec 02 15:10:04 crc kubenswrapper[4900]: I1202 15:10:04.959692 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rj9xn" event={"ID":"41c7fd27-2e6f-4c2c-a72a-06172797e12f","Type":"ContainerDied","Data":"848fb28230e049f9c653cb59ccc032c4836d04ec6a23f9081408df5909b61936"}
Dec 02 15:10:04 crc kubenswrapper[4900]: I1202 15:10:04.959744 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rj9xn" event={"ID":"41c7fd27-2e6f-4c2c-a72a-06172797e12f","Type":"ContainerStarted","Data":"3337130cdd92f9f919d039d046dbfe315d7c90da1bf4423781115dfae2d63067"}
Dec 02 15:10:04 crc kubenswrapper[4900]: I1202 15:10:04.959757 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-13d1-account-create-update-wj6rs" event={"ID":"ace928ef-1e0e-4667-b1e0-0050528071f2","Type":"ContainerStarted","Data":"50fee37938d55f334708f478b50a61bad8b719fa4ee610ade06df784bd4b812f"}
Dec 02 15:10:04 crc kubenswrapper[4900]: I1202 15:10:04.959771 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-13d1-account-create-update-wj6rs" event={"ID":"ace928ef-1e0e-4667-b1e0-0050528071f2","Type":"ContainerStarted","Data":"c09ec0993eb35f6a333d834f97c0cfd836cfcb87d6838f644ed1c2d50aed8bb5"}
Dec 02 15:10:05 crc kubenswrapper[4900]: I1202 15:10:05.933866 4900 generic.go:334] "Generic (PLEG): container finished" podID="ace928ef-1e0e-4667-b1e0-0050528071f2" containerID="50fee37938d55f334708f478b50a61bad8b719fa4ee610ade06df784bd4b812f" exitCode=0
Dec 02 15:10:05 crc kubenswrapper[4900]: I1202 15:10:05.933938 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-13d1-account-create-update-wj6rs" event={"ID":"ace928ef-1e0e-4667-b1e0-0050528071f2","Type":"ContainerDied","Data":"50fee37938d55f334708f478b50a61bad8b719fa4ee610ade06df784bd4b812f"}
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.367432 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-13d1-account-create-update-wj6rs"
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.378821 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rj9xn"
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.424788 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ace928ef-1e0e-4667-b1e0-0050528071f2-operator-scripts\") pod \"ace928ef-1e0e-4667-b1e0-0050528071f2\" (UID: \"ace928ef-1e0e-4667-b1e0-0050528071f2\") "
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.424884 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c7fd27-2e6f-4c2c-a72a-06172797e12f-operator-scripts\") pod \"41c7fd27-2e6f-4c2c-a72a-06172797e12f\" (UID: \"41c7fd27-2e6f-4c2c-a72a-06172797e12f\") "
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.424944 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czwx4\" (UniqueName: \"kubernetes.io/projected/ace928ef-1e0e-4667-b1e0-0050528071f2-kube-api-access-czwx4\") pod \"ace928ef-1e0e-4667-b1e0-0050528071f2\" (UID: \"ace928ef-1e0e-4667-b1e0-0050528071f2\") "
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.425002 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w22jf\" (UniqueName: \"kubernetes.io/projected/41c7fd27-2e6f-4c2c-a72a-06172797e12f-kube-api-access-w22jf\") pod \"41c7fd27-2e6f-4c2c-a72a-06172797e12f\" (UID: \"41c7fd27-2e6f-4c2c-a72a-06172797e12f\") "
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.425801 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace928ef-1e0e-4667-b1e0-0050528071f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ace928ef-1e0e-4667-b1e0-0050528071f2" (UID: "ace928ef-1e0e-4667-b1e0-0050528071f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.425903 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c7fd27-2e6f-4c2c-a72a-06172797e12f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41c7fd27-2e6f-4c2c-a72a-06172797e12f" (UID: "41c7fd27-2e6f-4c2c-a72a-06172797e12f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.432397 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace928ef-1e0e-4667-b1e0-0050528071f2-kube-api-access-czwx4" (OuterVolumeSpecName: "kube-api-access-czwx4") pod "ace928ef-1e0e-4667-b1e0-0050528071f2" (UID: "ace928ef-1e0e-4667-b1e0-0050528071f2"). InnerVolumeSpecName "kube-api-access-czwx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.435468 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c7fd27-2e6f-4c2c-a72a-06172797e12f-kube-api-access-w22jf" (OuterVolumeSpecName: "kube-api-access-w22jf") pod "41c7fd27-2e6f-4c2c-a72a-06172797e12f" (UID: "41c7fd27-2e6f-4c2c-a72a-06172797e12f"). InnerVolumeSpecName "kube-api-access-w22jf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.526903 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czwx4\" (UniqueName: \"kubernetes.io/projected/ace928ef-1e0e-4667-b1e0-0050528071f2-kube-api-access-czwx4\") on node \"crc\" DevicePath \"\""
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.526931 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w22jf\" (UniqueName: \"kubernetes.io/projected/41c7fd27-2e6f-4c2c-a72a-06172797e12f-kube-api-access-w22jf\") on node \"crc\" DevicePath \"\""
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.526941 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ace928ef-1e0e-4667-b1e0-0050528071f2-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.526949 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c7fd27-2e6f-4c2c-a72a-06172797e12f-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.910962 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7"
Dec 02 15:10:06 crc kubenswrapper[4900]: E1202 15:10:06.911437 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.950632 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-13d1-account-create-update-wj6rs" event={"ID":"ace928ef-1e0e-4667-b1e0-0050528071f2","Type":"ContainerDied","Data":"c09ec0993eb35f6a333d834f97c0cfd836cfcb87d6838f644ed1c2d50aed8bb5"}
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.950832 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c09ec0993eb35f6a333d834f97c0cfd836cfcb87d6838f644ed1c2d50aed8bb5"
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.951118 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-13d1-account-create-update-wj6rs"
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.955606 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rj9xn" event={"ID":"41c7fd27-2e6f-4c2c-a72a-06172797e12f","Type":"ContainerDied","Data":"3337130cdd92f9f919d039d046dbfe315d7c90da1bf4423781115dfae2d63067"}
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.955734 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3337130cdd92f9f919d039d046dbfe315d7c90da1bf4423781115dfae2d63067"
Dec 02 15:10:06 crc kubenswrapper[4900]: I1202 15:10:06.955746 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rj9xn"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.352024 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-dwq6m"]
Dec 02 15:10:09 crc kubenswrapper[4900]: E1202 15:10:09.353665 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace928ef-1e0e-4667-b1e0-0050528071f2" containerName="mariadb-account-create-update"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.353743 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace928ef-1e0e-4667-b1e0-0050528071f2" containerName="mariadb-account-create-update"
Dec 02 15:10:09 crc kubenswrapper[4900]: E1202 15:10:09.353816 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c7fd27-2e6f-4c2c-a72a-06172797e12f" containerName="mariadb-database-create"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.353876 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c7fd27-2e6f-4c2c-a72a-06172797e12f" containerName="mariadb-database-create"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.354071 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace928ef-1e0e-4667-b1e0-0050528071f2" containerName="mariadb-account-create-update"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.354132 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c7fd27-2e6f-4c2c-a72a-06172797e12f" containerName="mariadb-database-create"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.354964 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dwq6m"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.357439 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.357685 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.357907 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qntx7"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.357914 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.367133 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dwq6m"]
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.475162 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-combined-ca-bundle\") pod \"keystone-db-sync-dwq6m\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " pod="openstack/keystone-db-sync-dwq6m"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.475464 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-config-data\") pod \"keystone-db-sync-dwq6m\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " pod="openstack/keystone-db-sync-dwq6m"
Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.475528 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz5ht\" (UniqueName:
\"kubernetes.io/projected/3fde6f6d-851f-4370-a884-1f81c7ca4f15-kube-api-access-bz5ht\") pod \"keystone-db-sync-dwq6m\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " pod="openstack/keystone-db-sync-dwq6m" Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.577176 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-combined-ca-bundle\") pod \"keystone-db-sync-dwq6m\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " pod="openstack/keystone-db-sync-dwq6m" Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.577243 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-config-data\") pod \"keystone-db-sync-dwq6m\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " pod="openstack/keystone-db-sync-dwq6m" Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.577289 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz5ht\" (UniqueName: \"kubernetes.io/projected/3fde6f6d-851f-4370-a884-1f81c7ca4f15-kube-api-access-bz5ht\") pod \"keystone-db-sync-dwq6m\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " pod="openstack/keystone-db-sync-dwq6m" Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.583256 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-combined-ca-bundle\") pod \"keystone-db-sync-dwq6m\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " pod="openstack/keystone-db-sync-dwq6m" Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.587209 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-config-data\") pod \"keystone-db-sync-dwq6m\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " pod="openstack/keystone-db-sync-dwq6m" Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.592291 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz5ht\" (UniqueName: \"kubernetes.io/projected/3fde6f6d-851f-4370-a884-1f81c7ca4f15-kube-api-access-bz5ht\") pod \"keystone-db-sync-dwq6m\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " pod="openstack/keystone-db-sync-dwq6m" Dec 02 15:10:09 crc kubenswrapper[4900]: I1202 15:10:09.669436 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dwq6m" Dec 02 15:10:10 crc kubenswrapper[4900]: I1202 15:10:10.090352 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dwq6m"] Dec 02 15:10:10 crc kubenswrapper[4900]: W1202 15:10:10.099836 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fde6f6d_851f_4370_a884_1f81c7ca4f15.slice/crio-f696a602e8f2860fda7df8424a0b9d237f9666ba3a562f75adf81a1e27b66d99 WatchSource:0}: Error finding container f696a602e8f2860fda7df8424a0b9d237f9666ba3a562f75adf81a1e27b66d99: Status 404 returned error can't find the container with id f696a602e8f2860fda7df8424a0b9d237f9666ba3a562f75adf81a1e27b66d99 Dec 02 15:10:11 crc kubenswrapper[4900]: I1202 15:10:11.015845 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dwq6m" event={"ID":"3fde6f6d-851f-4370-a884-1f81c7ca4f15","Type":"ContainerStarted","Data":"402524039ce6b81615570a0ea96746b00a6a992df9b9cbc4cdfce35c89a6baf1"} Dec 02 15:10:11 crc kubenswrapper[4900]: I1202 15:10:11.016178 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dwq6m" event={"ID":"3fde6f6d-851f-4370-a884-1f81c7ca4f15","Type":"ContainerStarted","Data":"f696a602e8f2860fda7df8424a0b9d237f9666ba3a562f75adf81a1e27b66d99"} Dec 02 15:10:11 crc kubenswrapper[4900]: I1202 15:10:11.037996 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-dwq6m" podStartSLOduration=2.037975329 podStartE2EDuration="2.037975329s" podCreationTimestamp="2025-12-02 15:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:10:11.030385676 +0000 UTC m=+5256.446199547" watchObservedRunningTime="2025-12-02 15:10:11.037975329 +0000 UTC m=+5256.453789190" Dec 02 15:10:12 crc kubenswrapper[4900]: I1202 15:10:12.041446 4900 generic.go:334] "Generic (PLEG): container finished" podID="3fde6f6d-851f-4370-a884-1f81c7ca4f15" containerID="402524039ce6b81615570a0ea96746b00a6a992df9b9cbc4cdfce35c89a6baf1" exitCode=0 Dec 02 15:10:12 crc kubenswrapper[4900]: I1202 15:10:12.041768 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dwq6m" event={"ID":"3fde6f6d-851f-4370-a884-1f81c7ca4f15","Type":"ContainerDied","Data":"402524039ce6b81615570a0ea96746b00a6a992df9b9cbc4cdfce35c89a6baf1"} Dec 02 15:10:13 crc kubenswrapper[4900]: I1202 15:10:13.446309 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dwq6m" Dec 02 15:10:13 crc kubenswrapper[4900]: I1202 15:10:13.561840 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-config-data\") pod \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " Dec 02 15:10:13 crc kubenswrapper[4900]: I1202 15:10:13.562089 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-combined-ca-bundle\") pod \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " Dec 02 15:10:13 crc kubenswrapper[4900]: I1202 15:10:13.562140 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz5ht\" (UniqueName: \"kubernetes.io/projected/3fde6f6d-851f-4370-a884-1f81c7ca4f15-kube-api-access-bz5ht\") pod \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\" (UID: \"3fde6f6d-851f-4370-a884-1f81c7ca4f15\") " Dec 02 15:10:13 crc kubenswrapper[4900]: I1202 15:10:13.577856 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fde6f6d-851f-4370-a884-1f81c7ca4f15-kube-api-access-bz5ht" (OuterVolumeSpecName: "kube-api-access-bz5ht") pod "3fde6f6d-851f-4370-a884-1f81c7ca4f15" (UID: "3fde6f6d-851f-4370-a884-1f81c7ca4f15"). InnerVolumeSpecName "kube-api-access-bz5ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:10:13 crc kubenswrapper[4900]: I1202 15:10:13.588348 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fde6f6d-851f-4370-a884-1f81c7ca4f15" (UID: "3fde6f6d-851f-4370-a884-1f81c7ca4f15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:13 crc kubenswrapper[4900]: I1202 15:10:13.607041 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-config-data" (OuterVolumeSpecName: "config-data") pod "3fde6f6d-851f-4370-a884-1f81c7ca4f15" (UID: "3fde6f6d-851f-4370-a884-1f81c7ca4f15"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:13 crc kubenswrapper[4900]: I1202 15:10:13.664163 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:13 crc kubenswrapper[4900]: I1202 15:10:13.664208 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz5ht\" (UniqueName: \"kubernetes.io/projected/3fde6f6d-851f-4370-a884-1f81c7ca4f15-kube-api-access-bz5ht\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:13 crc kubenswrapper[4900]: I1202 15:10:13.664230 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fde6f6d-851f-4370-a884-1f81c7ca4f15-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:13 crc kubenswrapper[4900]: I1202 15:10:13.792119 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.067361 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dwq6m" event={"ID":"3fde6f6d-851f-4370-a884-1f81c7ca4f15","Type":"ContainerDied","Data":"f696a602e8f2860fda7df8424a0b9d237f9666ba3a562f75adf81a1e27b66d99"} Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.067788 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f696a602e8f2860fda7df8424a0b9d237f9666ba3a562f75adf81a1e27b66d99" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.067413 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dwq6m" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.209962 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f7f8f447f-gtg29"] Dec 02 15:10:14 crc kubenswrapper[4900]: E1202 15:10:14.210271 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fde6f6d-851f-4370-a884-1f81c7ca4f15" containerName="keystone-db-sync" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.210286 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fde6f6d-851f-4370-a884-1f81c7ca4f15" containerName="keystone-db-sync" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.210448 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fde6f6d-851f-4370-a884-1f81c7ca4f15" containerName="keystone-db-sync" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.211255 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.224896 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7f8f447f-gtg29"] Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.250864 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-48bxk"] Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.251856 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.256534 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-48bxk"] Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.258875 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.258905 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qntx7" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.259087 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.259242 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.260268 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.379212 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-combined-ca-bundle\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.379260 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-config\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.379335 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.379359 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcpb6\" (UniqueName: \"kubernetes.io/projected/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-kube-api-access-rcpb6\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.379386 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-config-data\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.379410 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vl2f\" (UniqueName: \"kubernetes.io/projected/efae07a4-b59b-4708-b2ee-1c7590129fbb-kube-api-access-5vl2f\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.379430 4900 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-dns-svc\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.379564 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-fernet-keys\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.379594 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.379614 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-scripts\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.379634 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-credential-keys\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.480821 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vl2f\" (UniqueName: \"kubernetes.io/projected/efae07a4-b59b-4708-b2ee-1c7590129fbb-kube-api-access-5vl2f\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.480886 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-dns-svc\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.480936 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-fernet-keys\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.480982 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.481007 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-scripts\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.481032 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-credential-keys\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.481111 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-combined-ca-bundle\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.481146 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-config\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.481201 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.481234 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcpb6\" (UniqueName: \"kubernetes.io/projected/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-kube-api-access-rcpb6\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.481263 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-config-data\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.482524 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-dns-svc\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.482585 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.482770 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.482824 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-config\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.486096 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-scripts\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.487516 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-fernet-keys\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.490317 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-config-data\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.491938 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-credential-keys\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.492079 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-combined-ca-bundle\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.500506 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcpb6\" (UniqueName: \"kubernetes.io/projected/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-kube-api-access-rcpb6\") pod \"dnsmasq-dns-7f7f8f447f-gtg29\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.506378 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vl2f\" (UniqueName: \"kubernetes.io/projected/efae07a4-b59b-4708-b2ee-1c7590129fbb-kube-api-access-5vl2f\") pod \"keystone-bootstrap-48bxk\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.528207 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:14 crc kubenswrapper[4900]: I1202 15:10:14.566937 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:15 crc kubenswrapper[4900]: I1202 15:10:15.054703 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7f8f447f-gtg29"] Dec 02 15:10:15 crc kubenswrapper[4900]: I1202 15:10:15.153775 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-48bxk"] Dec 02 15:10:16 crc kubenswrapper[4900]: I1202 15:10:16.105481 4900 generic.go:334] "Generic (PLEG): container finished" podID="d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" containerID="e520f6044690fd3e9fdd2f5d2a2f3eb92f848a198c178676ed55e43e0b0bb24e" exitCode=0 Dec 02 15:10:16 crc kubenswrapper[4900]: I1202 15:10:16.105610 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" event={"ID":"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4","Type":"ContainerDied","Data":"e520f6044690fd3e9fdd2f5d2a2f3eb92f848a198c178676ed55e43e0b0bb24e"} Dec 02 15:10:16 crc kubenswrapper[4900]: I1202 15:10:16.106025 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" event={"ID":"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4","Type":"ContainerStarted","Data":"04cbf9f3ab7625c0e1c7244de4d7690ec975881e7d3bd865b9104cda5a9f7d34"} Dec 02 15:10:16 crc kubenswrapper[4900]: I1202 15:10:16.113151 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48bxk" event={"ID":"efae07a4-b59b-4708-b2ee-1c7590129fbb","Type":"ContainerStarted","Data":"b121a7162167d88698bfd3ef64bd7a928f439165fbd7e79dbb68c089b6d3e1ad"} Dec 02 15:10:16 crc kubenswrapper[4900]: I1202 15:10:16.113199 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48bxk" event={"ID":"efae07a4-b59b-4708-b2ee-1c7590129fbb","Type":"ContainerStarted","Data":"8127a0fb580179d16e3de11e6e625f848239729599d8cfbbcce8a7b536444ce2"} Dec 02 15:10:16 crc kubenswrapper[4900]: I1202 15:10:16.182164 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-48bxk" podStartSLOduration=2.182137744 podStartE2EDuration="2.182137744s" podCreationTimestamp="2025-12-02 15:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:10:16.176041883 +0000 UTC m=+5261.591855744" watchObservedRunningTime="2025-12-02 15:10:16.182137744 +0000 UTC m=+5261.597951605" Dec 02 15:10:17 crc kubenswrapper[4900]: I1202 15:10:17.124088 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" event={"ID":"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4","Type":"ContainerStarted","Data":"f83da3bfe3e054a647e28fe56c91482130ac54310196fa2fb2e1e07e5e3b0ef7"} Dec 02 15:10:17 crc kubenswrapper[4900]: I1202 15:10:17.155388 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" podStartSLOduration=3.1553568260000002 podStartE2EDuration="3.155356826s" podCreationTimestamp="2025-12-02 15:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:10:17.149037839 +0000 UTC m=+5262.564851700" watchObservedRunningTime="2025-12-02 15:10:17.155356826 +0000 UTC m=+5262.571170687" Dec 02 15:10:18 crc kubenswrapper[4900]: I1202 15:10:18.135809 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:19 crc 
kubenswrapper[4900]: I1202 15:10:19.146561 4900 generic.go:334] "Generic (PLEG): container finished" podID="efae07a4-b59b-4708-b2ee-1c7590129fbb" containerID="b121a7162167d88698bfd3ef64bd7a928f439165fbd7e79dbb68c089b6d3e1ad" exitCode=0 Dec 02 15:10:19 crc kubenswrapper[4900]: I1202 15:10:19.146681 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48bxk" event={"ID":"efae07a4-b59b-4708-b2ee-1c7590129fbb","Type":"ContainerDied","Data":"b121a7162167d88698bfd3ef64bd7a928f439165fbd7e79dbb68c089b6d3e1ad"} Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.533920 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.604218 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vl2f\" (UniqueName: \"kubernetes.io/projected/efae07a4-b59b-4708-b2ee-1c7590129fbb-kube-api-access-5vl2f\") pod \"efae07a4-b59b-4708-b2ee-1c7590129fbb\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.604272 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-combined-ca-bundle\") pod \"efae07a4-b59b-4708-b2ee-1c7590129fbb\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.604311 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-credential-keys\") pod \"efae07a4-b59b-4708-b2ee-1c7590129fbb\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.604347 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-fernet-keys\") pod \"efae07a4-b59b-4708-b2ee-1c7590129fbb\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.604439 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-config-data\") pod \"efae07a4-b59b-4708-b2ee-1c7590129fbb\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.604532 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-scripts\") pod \"efae07a4-b59b-4708-b2ee-1c7590129fbb\" (UID: \"efae07a4-b59b-4708-b2ee-1c7590129fbb\") " Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.611462 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-scripts" (OuterVolumeSpecName: "scripts") pod "efae07a4-b59b-4708-b2ee-1c7590129fbb" (UID: "efae07a4-b59b-4708-b2ee-1c7590129fbb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.612954 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "efae07a4-b59b-4708-b2ee-1c7590129fbb" (UID: "efae07a4-b59b-4708-b2ee-1c7590129fbb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.613162 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efae07a4-b59b-4708-b2ee-1c7590129fbb-kube-api-access-5vl2f" (OuterVolumeSpecName: "kube-api-access-5vl2f") pod "efae07a4-b59b-4708-b2ee-1c7590129fbb" (UID: "efae07a4-b59b-4708-b2ee-1c7590129fbb"). InnerVolumeSpecName "kube-api-access-5vl2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.613180 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "efae07a4-b59b-4708-b2ee-1c7590129fbb" (UID: "efae07a4-b59b-4708-b2ee-1c7590129fbb"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.632520 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-config-data" (OuterVolumeSpecName: "config-data") pod "efae07a4-b59b-4708-b2ee-1c7590129fbb" (UID: "efae07a4-b59b-4708-b2ee-1c7590129fbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.632570 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efae07a4-b59b-4708-b2ee-1c7590129fbb" (UID: "efae07a4-b59b-4708-b2ee-1c7590129fbb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.706894 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vl2f\" (UniqueName: \"kubernetes.io/projected/efae07a4-b59b-4708-b2ee-1c7590129fbb-kube-api-access-5vl2f\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.706934 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.706946 4900 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.706958 4900 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.706970 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.706980 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efae07a4-b59b-4708-b2ee-1c7590129fbb-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:20 crc kubenswrapper[4900]: I1202 15:10:20.909942 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:10:20 crc kubenswrapper[4900]: E1202 15:10:20.910299 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.168415 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48bxk" event={"ID":"efae07a4-b59b-4708-b2ee-1c7590129fbb","Type":"ContainerDied","Data":"8127a0fb580179d16e3de11e6e625f848239729599d8cfbbcce8a7b536444ce2"} Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.168457 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8127a0fb580179d16e3de11e6e625f848239729599d8cfbbcce8a7b536444ce2" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.168533 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-48bxk" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.253214 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-48bxk"] Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.263559 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-48bxk"] Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.355043 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6fxsq"] Dec 02 15:10:21 crc kubenswrapper[4900]: E1202 15:10:21.355361 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efae07a4-b59b-4708-b2ee-1c7590129fbb" containerName="keystone-bootstrap" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.355378 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="efae07a4-b59b-4708-b2ee-1c7590129fbb" containerName="keystone-bootstrap" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.355542 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="efae07a4-b59b-4708-b2ee-1c7590129fbb" containerName="keystone-bootstrap" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.356055 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.358233 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qntx7" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.358298 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.359079 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.359288 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.362993 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.374087 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6fxsq"] Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.421999 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-credential-keys\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.422067 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-scripts\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.422110 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-combined-ca-bundle\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.422136 
4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc5tf\" (UniqueName: \"kubernetes.io/projected/226107e9-7c04-4d39-b2c2-78e6e5cfd695-kube-api-access-fc5tf\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.422217 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-fernet-keys\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.422241 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-config-data\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.523866 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-fernet-keys\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.523915 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-config-data\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.523999 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-credential-keys\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.524033 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-scripts\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.524077 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-combined-ca-bundle\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.524107 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc5tf\" (UniqueName: \"kubernetes.io/projected/226107e9-7c04-4d39-b2c2-78e6e5cfd695-kube-api-access-fc5tf\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.528334 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-scripts\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.528355 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-credential-keys\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.528554 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-config-data\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.529275 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-fernet-keys\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.531123 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-combined-ca-bundle\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.545385 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc5tf\" (UniqueName: \"kubernetes.io/projected/226107e9-7c04-4d39-b2c2-78e6e5cfd695-kube-api-access-fc5tf\") pod \"keystone-bootstrap-6fxsq\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.689975 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:21 crc kubenswrapper[4900]: I1202 15:10:21.953744 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6fxsq"] Dec 02 15:10:22 crc kubenswrapper[4900]: I1202 15:10:22.178506 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6fxsq" event={"ID":"226107e9-7c04-4d39-b2c2-78e6e5cfd695","Type":"ContainerStarted","Data":"f364c8e3ce4a6c44648db53e260c9705179a3092500aaf16a9589ac6a262ae83"} Dec 02 15:10:22 crc kubenswrapper[4900]: I1202 15:10:22.178843 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6fxsq" event={"ID":"226107e9-7c04-4d39-b2c2-78e6e5cfd695","Type":"ContainerStarted","Data":"7da79e52ca6d57a94b2013c8a7072f744ad58b6e42ecdc5b731777d3285d6b46"} Dec 02 15:10:22 crc kubenswrapper[4900]: I1202 15:10:22.203388 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6fxsq" podStartSLOduration=1.203363133 podStartE2EDuration="1.203363133s" podCreationTimestamp="2025-12-02 15:10:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:10:22.197719105 +0000 UTC m=+5267.613532956" watchObservedRunningTime="2025-12-02 15:10:22.203363133 +0000 UTC m=+5267.619176984" Dec 02 15:10:22 crc kubenswrapper[4900]: I1202 15:10:22.932035 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efae07a4-b59b-4708-b2ee-1c7590129fbb" path="/var/lib/kubelet/pods/efae07a4-b59b-4708-b2ee-1c7590129fbb/volumes" Dec 02 15:10:24 crc kubenswrapper[4900]: I1202 15:10:24.531261 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:10:24 crc kubenswrapper[4900]: I1202 15:10:24.601784 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ddc6654df-dr622"] Dec 02 15:10:24 crc kubenswrapper[4900]: I1202 15:10:24.602006 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ddc6654df-dr622" podUID="36b9b506-2cd8-4957-ae76-05e28ed9b74c" containerName="dnsmasq-dns" containerID="cri-o://4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12" gracePeriod=10 Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.075939 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ddc6654df-dr622" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.188587 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-config\") pod \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.188751 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsmpp\" (UniqueName: \"kubernetes.io/projected/36b9b506-2cd8-4957-ae76-05e28ed9b74c-kube-api-access-nsmpp\") pod \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.188816 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-dns-svc\") pod \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.188855 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-nb\") pod \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.188944 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-sb\") pod \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\" (UID: \"36b9b506-2cd8-4957-ae76-05e28ed9b74c\") " Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.193587 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b9b506-2cd8-4957-ae76-05e28ed9b74c-kube-api-access-nsmpp" (OuterVolumeSpecName: "kube-api-access-nsmpp") pod "36b9b506-2cd8-4957-ae76-05e28ed9b74c" (UID: "36b9b506-2cd8-4957-ae76-05e28ed9b74c"). InnerVolumeSpecName "kube-api-access-nsmpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.213227 4900 generic.go:334] "Generic (PLEG): container finished" podID="36b9b506-2cd8-4957-ae76-05e28ed9b74c" containerID="4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12" exitCode=0 Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.213355 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ddc6654df-dr622" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.213544 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ddc6654df-dr622" event={"ID":"36b9b506-2cd8-4957-ae76-05e28ed9b74c","Type":"ContainerDied","Data":"4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12"} Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.213632 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ddc6654df-dr622" event={"ID":"36b9b506-2cd8-4957-ae76-05e28ed9b74c","Type":"ContainerDied","Data":"81eda33fdba050df1bcf77ed5a66d5881860cfd0c1bf3b7f21b158109f0da54c"} Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.213729 4900 scope.go:117] "RemoveContainer" containerID="4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.215773 4900 generic.go:334] "Generic (PLEG): container finished" podID="226107e9-7c04-4d39-b2c2-78e6e5cfd695" containerID="f364c8e3ce4a6c44648db53e260c9705179a3092500aaf16a9589ac6a262ae83" exitCode=0 Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.215876 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6fxsq" event={"ID":"226107e9-7c04-4d39-b2c2-78e6e5cfd695","Type":"ContainerDied","Data":"f364c8e3ce4a6c44648db53e260c9705179a3092500aaf16a9589ac6a262ae83"} Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.232081 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-config" (OuterVolumeSpecName: "config") pod "36b9b506-2cd8-4957-ae76-05e28ed9b74c" (UID: "36b9b506-2cd8-4957-ae76-05e28ed9b74c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.232451 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36b9b506-2cd8-4957-ae76-05e28ed9b74c" (UID: "36b9b506-2cd8-4957-ae76-05e28ed9b74c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.252428 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36b9b506-2cd8-4957-ae76-05e28ed9b74c" (UID: "36b9b506-2cd8-4957-ae76-05e28ed9b74c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.257546 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36b9b506-2cd8-4957-ae76-05e28ed9b74c" (UID: "36b9b506-2cd8-4957-ae76-05e28ed9b74c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.290554 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.290607 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.290619 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsmpp\" (UniqueName: \"kubernetes.io/projected/36b9b506-2cd8-4957-ae76-05e28ed9b74c-kube-api-access-nsmpp\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.290628 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.290653 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b9b506-2cd8-4957-ae76-05e28ed9b74c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.294680 4900 scope.go:117] "RemoveContainer" containerID="053f5789e5fc4634db6e8c19bed0cfcfea6a6b13997c70b282ddaa9f482be02c" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.316959 4900 scope.go:117] "RemoveContainer" containerID="4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12" Dec 02 15:10:25 crc kubenswrapper[4900]: E1202 15:10:25.317326 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12\": container with ID starting with 4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12 not found: ID does not exist" containerID="4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.317356 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12"} err="failed to get container status \"4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12\": rpc error: code = NotFound desc = could not find container \"4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12\": container with ID starting with 4311fee32d96736c69e929f3167c5839f39913f202f75db9536d27595b62ba12 not found: ID does not exist" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.317375 4900 scope.go:117] "RemoveContainer" containerID="053f5789e5fc4634db6e8c19bed0cfcfea6a6b13997c70b282ddaa9f482be02c" Dec 02 15:10:25 crc kubenswrapper[4900]: E1202 15:10:25.317738 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"053f5789e5fc4634db6e8c19bed0cfcfea6a6b13997c70b282ddaa9f482be02c\": container with ID starting with 053f5789e5fc4634db6e8c19bed0cfcfea6a6b13997c70b282ddaa9f482be02c not found: ID does not exist" containerID="053f5789e5fc4634db6e8c19bed0cfcfea6a6b13997c70b282ddaa9f482be02c" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.317788 4900 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"053f5789e5fc4634db6e8c19bed0cfcfea6a6b13997c70b282ddaa9f482be02c"} err="failed to get container status \"053f5789e5fc4634db6e8c19bed0cfcfea6a6b13997c70b282ddaa9f482be02c\": rpc error: code = NotFound desc = could not find container \"053f5789e5fc4634db6e8c19bed0cfcfea6a6b13997c70b282ddaa9f482be02c\": container with ID starting with 053f5789e5fc4634db6e8c19bed0cfcfea6a6b13997c70b282ddaa9f482be02c not found: ID does not exist" Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.559763 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ddc6654df-dr622"] Dec 02 15:10:25 crc kubenswrapper[4900]: I1202 15:10:25.576565 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ddc6654df-dr622"] Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.668367 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.719467 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-fernet-keys\") pod \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.719815 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-scripts\") pod \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.719872 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-combined-ca-bundle\") pod \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.719980 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-config-data\") pod \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.720008 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc5tf\" (UniqueName: \"kubernetes.io/projected/226107e9-7c04-4d39-b2c2-78e6e5cfd695-kube-api-access-fc5tf\") pod \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.720097 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-credential-keys\") pod \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\" (UID: \"226107e9-7c04-4d39-b2c2-78e6e5cfd695\") " Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.725156 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226107e9-7c04-4d39-b2c2-78e6e5cfd695-kube-api-access-fc5tf" (OuterVolumeSpecName: "kube-api-access-fc5tf") pod "226107e9-7c04-4d39-b2c2-78e6e5cfd695" (UID: "226107e9-7c04-4d39-b2c2-78e6e5cfd695"). InnerVolumeSpecName "kube-api-access-fc5tf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.726266 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-scripts" (OuterVolumeSpecName: "scripts") pod "226107e9-7c04-4d39-b2c2-78e6e5cfd695" (UID: "226107e9-7c04-4d39-b2c2-78e6e5cfd695"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.726443 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "226107e9-7c04-4d39-b2c2-78e6e5cfd695" (UID: "226107e9-7c04-4d39-b2c2-78e6e5cfd695"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.729714 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "226107e9-7c04-4d39-b2c2-78e6e5cfd695" (UID: "226107e9-7c04-4d39-b2c2-78e6e5cfd695"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.749334 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "226107e9-7c04-4d39-b2c2-78e6e5cfd695" (UID: "226107e9-7c04-4d39-b2c2-78e6e5cfd695"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.750279 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-config-data" (OuterVolumeSpecName: "config-data") pod "226107e9-7c04-4d39-b2c2-78e6e5cfd695" (UID: "226107e9-7c04-4d39-b2c2-78e6e5cfd695"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.821779 4900 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.822050 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.822165 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.822276 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.822383 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc5tf\" (UniqueName: \"kubernetes.io/projected/226107e9-7c04-4d39-b2c2-78e6e5cfd695-kube-api-access-fc5tf\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.822489 4900 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226107e9-7c04-4d39-b2c2-78e6e5cfd695-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:26 crc kubenswrapper[4900]: I1202 15:10:26.924752 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b9b506-2cd8-4957-ae76-05e28ed9b74c" path="/var/lib/kubelet/pods/36b9b506-2cd8-4957-ae76-05e28ed9b74c/volumes" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.249166 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6fxsq" event={"ID":"226107e9-7c04-4d39-b2c2-78e6e5cfd695","Type":"ContainerDied","Data":"7da79e52ca6d57a94b2013c8a7072f744ad58b6e42ecdc5b731777d3285d6b46"} Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.249209 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7da79e52ca6d57a94b2013c8a7072f744ad58b6e42ecdc5b731777d3285d6b46" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.249709 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6fxsq" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.341465 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7944bbc6f9-jf4lg"] Dec 02 15:10:27 crc kubenswrapper[4900]: E1202 15:10:27.341857 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226107e9-7c04-4d39-b2c2-78e6e5cfd695" containerName="keystone-bootstrap" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.341881 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="226107e9-7c04-4d39-b2c2-78e6e5cfd695" containerName="keystone-bootstrap" Dec 02 15:10:27 crc kubenswrapper[4900]: E1202 15:10:27.341909 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b9b506-2cd8-4957-ae76-05e28ed9b74c" containerName="init" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.341918 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b9b506-2cd8-4957-ae76-05e28ed9b74c" containerName="init" Dec 02 15:10:27 crc kubenswrapper[4900]: E1202 15:10:27.341939 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b9b506-2cd8-4957-ae76-05e28ed9b74c" containerName="dnsmasq-dns" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.341948 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b9b506-2cd8-4957-ae76-05e28ed9b74c" containerName="dnsmasq-dns" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.342113 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="36b9b506-2cd8-4957-ae76-05e28ed9b74c" containerName="dnsmasq-dns" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.342130 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="226107e9-7c04-4d39-b2c2-78e6e5cfd695" containerName="keystone-bootstrap" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.342768 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.345389 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.345528 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.345766 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.352336 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qntx7" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.364600 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7944bbc6f9-jf4lg"] Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.433592 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-credential-keys\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.433698 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-fernet-keys\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.433733 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-scripts\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.433839 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-config-data\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.433868 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drdhr\" (UniqueName: \"kubernetes.io/projected/f1416913-b691-42de-b4ff-b6266c6436d3-kube-api-access-drdhr\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.433933 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-combined-ca-bundle\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.535719 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-credential-keys\") pod 
\"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.535772 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-fernet-keys\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.535791 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-scripts\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.535850 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-config-data\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.535869 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drdhr\" (UniqueName: \"kubernetes.io/projected/f1416913-b691-42de-b4ff-b6266c6436d3-kube-api-access-drdhr\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.535902 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-combined-ca-bundle\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.550560 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-combined-ca-bundle\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.551152 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-scripts\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.570091 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-credential-keys\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.588254 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-config-data\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 
15:10:27.592297 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drdhr\" (UniqueName: \"kubernetes.io/projected/f1416913-b691-42de-b4ff-b6266c6436d3-kube-api-access-drdhr\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.593278 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1416913-b691-42de-b4ff-b6266c6436d3-fernet-keys\") pod \"keystone-7944bbc6f9-jf4lg\" (UID: \"f1416913-b691-42de-b4ff-b6266c6436d3\") " pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:27 crc kubenswrapper[4900]: I1202 15:10:27.668819 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:28 crc kubenswrapper[4900]: I1202 15:10:28.101895 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7944bbc6f9-jf4lg"] Dec 02 15:10:28 crc kubenswrapper[4900]: W1202 15:10:28.107206 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1416913_b691_42de_b4ff_b6266c6436d3.slice/crio-aef89755da6a2c24d0324a368eb0fc343576e39b7baa9c9aa2fda7b07446cb35 WatchSource:0}: Error finding container aef89755da6a2c24d0324a368eb0fc343576e39b7baa9c9aa2fda7b07446cb35: Status 404 returned error can't find the container with id aef89755da6a2c24d0324a368eb0fc343576e39b7baa9c9aa2fda7b07446cb35 Dec 02 15:10:28 crc kubenswrapper[4900]: I1202 15:10:28.259371 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7944bbc6f9-jf4lg" event={"ID":"f1416913-b691-42de-b4ff-b6266c6436d3","Type":"ContainerStarted","Data":"aef89755da6a2c24d0324a368eb0fc343576e39b7baa9c9aa2fda7b07446cb35"} Dec 02 15:10:29 crc kubenswrapper[4900]: I1202 15:10:29.273368 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7944bbc6f9-jf4lg" event={"ID":"f1416913-b691-42de-b4ff-b6266c6436d3","Type":"ContainerStarted","Data":"afcbd631ba398971fc83d08424a6831506eaadfd17d598ae7973117696e18358"} Dec 02 15:10:29 crc kubenswrapper[4900]: I1202 15:10:29.274873 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:10:29 crc kubenswrapper[4900]: I1202 15:10:29.309828 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7944bbc6f9-jf4lg" podStartSLOduration=2.309805308 podStartE2EDuration="2.309805308s" podCreationTimestamp="2025-12-02 15:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:10:29.304129269 +0000 UTC m=+5274.719943130" watchObservedRunningTime="2025-12-02 15:10:29.309805308 +0000 UTC m=+5274.725619189" Dec 02 15:10:34 crc kubenswrapper[4900]: I1202 15:10:34.920204 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:10:34 crc kubenswrapper[4900]: E1202 15:10:34.921722 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:10:46 crc kubenswrapper[4900]: I1202 15:10:46.910610 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:10:46 crc kubenswrapper[4900]: E1202 15:10:46.911809 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:10:57 crc kubenswrapper[4900]: I1202 15:10:57.911211 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:10:57 crc kubenswrapper[4900]: E1202 15:10:57.912305 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:10:59 crc kubenswrapper[4900]: I1202 15:10:59.085912 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7944bbc6f9-jf4lg" Dec 02 15:11:00 crc kubenswrapper[4900]: I1202 15:11:00.811543 4900 scope.go:117] "RemoveContainer" containerID="2316b256ea2b503f42a7d25505807b810d7e81c61d718091586106417c85b48c" Dec 02 15:11:00 crc kubenswrapper[4900]: I1202 15:11:00.852120 4900 scope.go:117] "RemoveContainer" containerID="537e8e418df125091c3c42753b8781f4e7a707c9d65f8abeb1a8fecdefe40593" Dec 02 15:11:00 crc kubenswrapper[4900]: I1202 15:11:00.901130 4900 scope.go:117] "RemoveContainer" containerID="1a7cd29a36747baa0b5332263ec93604ab69ede92c369fb077833d62b08fc2ed" Dec 02 15:11:00 crc kubenswrapper[4900]: I1202 15:11:00.942227 4900 scope.go:117] "RemoveContainer" containerID="9c5a9818de0bba4699fe2598e6338a3b8b04e81d8dbc03a338003f377349e3a2" Dec 02 15:11:00 crc kubenswrapper[4900]: I1202 15:11:00.965238 4900 scope.go:117] "RemoveContainer" containerID="fe7bdd2b76cc289dcb63dfb4b76177e0c6c4ae098f05034bba5ea3528063bf28" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.605440 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.607736 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.610043 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.610435 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bpdkw" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.612614 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.632512 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.646782 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.650022 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 02 15:11:03 crc kubenswrapper[4900]: E1202 15:11:03.652134 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-pfrsp openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="babba2b9-b577-4cf6-bc51-852624c87881" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.671064 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.673823 4900 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="babba2b9-b577-4cf6-bc51-852624c87881" podUID="beee5dba-052e-4430-9e85-79b478eabf6d" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.684530 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.689576 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.690666 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.700174 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.709946 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/babba2b9-b577-4cf6-bc51-852624c87881-openstack-config\") pod \"openstackclient\" (UID: \"babba2b9-b577-4cf6-bc51-852624c87881\") " pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.710095 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfrsp\" (UniqueName: \"kubernetes.io/projected/babba2b9-b577-4cf6-bc51-852624c87881-kube-api-access-pfrsp\") pod \"openstackclient\" (UID: \"babba2b9-b577-4cf6-bc51-852624c87881\") " pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.710177 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/babba2b9-b577-4cf6-bc51-852624c87881-openstack-config-secret\") pod \"openstackclient\" (UID: \"babba2b9-b577-4cf6-bc51-852624c87881\") " pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.811224 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnssc\" (UniqueName: \"kubernetes.io/projected/beee5dba-052e-4430-9e85-79b478eabf6d-kube-api-access-rnssc\") pod \"openstackclient\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.811273 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config\") pod \"openstackclient\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.811291 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config-secret\") pod \"openstackclient\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.811489 4900 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/babba2b9-b577-4cf6-bc51-852624c87881-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.811506 4900 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/babba2b9-b577-4cf6-bc51-852624c87881-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.811514 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfrsp\" (UniqueName: \"kubernetes.io/projected/babba2b9-b577-4cf6-bc51-852624c87881-kube-api-access-pfrsp\") on node \"crc\" DevicePath \"\"" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.912520 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rnssc\" (UniqueName: \"kubernetes.io/projected/beee5dba-052e-4430-9e85-79b478eabf6d-kube-api-access-rnssc\") pod \"openstackclient\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.912932 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config\") pod \"openstackclient\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.912964 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config-secret\") pod \"openstackclient\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.914763 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config\") pod \"openstackclient\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.918044 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config-secret\") pod \"openstackclient\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " pod="openstack/openstackclient" Dec 02 15:11:03 crc kubenswrapper[4900]: I1202 15:11:03.947235 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnssc\" (UniqueName: \"kubernetes.io/projected/beee5dba-052e-4430-9e85-79b478eabf6d-kube-api-access-rnssc\") pod \"openstackclient\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " pod="openstack/openstackclient" Dec 02 15:11:04 crc kubenswrapper[4900]: I1202 15:11:04.010059 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 15:11:04 crc kubenswrapper[4900]: I1202 15:11:04.518049 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 15:11:04 crc kubenswrapper[4900]: I1202 15:11:04.684259 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 15:11:04 crc kubenswrapper[4900]: I1202 15:11:04.684327 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"beee5dba-052e-4430-9e85-79b478eabf6d","Type":"ContainerStarted","Data":"4cd4b4a4c243df9b9c9ab669aee7d373f16c65b64f76fae040052f2b36cfe2e4"} Dec 02 15:11:04 crc kubenswrapper[4900]: I1202 15:11:04.689001 4900 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="babba2b9-b577-4cf6-bc51-852624c87881" podUID="beee5dba-052e-4430-9e85-79b478eabf6d" Dec 02 15:11:04 crc kubenswrapper[4900]: I1202 15:11:04.925925 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="babba2b9-b577-4cf6-bc51-852624c87881" path="/var/lib/kubelet/pods/babba2b9-b577-4cf6-bc51-852624c87881/volumes" Dec 02 15:11:05 crc kubenswrapper[4900]: I1202 15:11:05.694143 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"beee5dba-052e-4430-9e85-79b478eabf6d","Type":"ContainerStarted","Data":"3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a"} Dec 02 15:11:05 crc kubenswrapper[4900]: I1202 15:11:05.724022 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.723986843 podStartE2EDuration="2.723986843s" podCreationTimestamp="2025-12-02 15:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:11:05.714705722 +0000 UTC m=+5311.130519623" watchObservedRunningTime="2025-12-02 15:11:05.723986843 +0000 UTC m=+5311.139800734" Dec 02 15:11:10 crc kubenswrapper[4900]: I1202 15:11:10.910452 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:11:10 crc kubenswrapper[4900]: E1202 15:11:10.911053 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:11:21 crc kubenswrapper[4900]: I1202 15:11:21.910377 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:11:22 crc kubenswrapper[4900]: I1202 15:11:22.854300 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"3b76a522fc29ab4b883e8d52d8ae8d1cc61b9e17f09e1711cca595a73a978fea"} Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.736882 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m55mf"] Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.739742 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m55mf" Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.771104 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m55mf"] Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.849259 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-catalog-content\") pod \"community-operators-m55mf\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") " pod="openshift-marketplace/community-operators-m55mf" Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.849322 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-utilities\") pod \"community-operators-m55mf\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") " pod="openshift-marketplace/community-operators-m55mf" Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.849359 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbbcc\" (UniqueName: \"kubernetes.io/projected/e63087c8-8cd4-43d9-801e-ceedc4c72e91-kube-api-access-kbbcc\") pod \"community-operators-m55mf\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") " pod="openshift-marketplace/community-operators-m55mf" Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.958441 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-catalog-content\") pod \"community-operators-m55mf\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") " pod="openshift-marketplace/community-operators-m55mf" Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.977873 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-utilities\") pod \"community-operators-m55mf\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") " pod="openshift-marketplace/community-operators-m55mf" Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.977973 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbbcc\" (UniqueName: \"kubernetes.io/projected/e63087c8-8cd4-43d9-801e-ceedc4c72e91-kube-api-access-kbbcc\") pod \"community-operators-m55mf\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") " pod="openshift-marketplace/community-operators-m55mf" Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.979154 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-catalog-content\") pod \"community-operators-m55mf\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") " pod="openshift-marketplace/community-operators-m55mf" Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.979505 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-utilities\") pod \"community-operators-m55mf\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") " pod="openshift-marketplace/community-operators-m55mf" Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.966697 4900 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/certified-operators-wgs7m"] Dec 02 15:11:34 crc kubenswrapper[4900]: I1202 15:11:34.982430 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgs7m" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.037431 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbbcc\" (UniqueName: \"kubernetes.io/projected/e63087c8-8cd4-43d9-801e-ceedc4c72e91-kube-api-access-kbbcc\") pod \"community-operators-m55mf\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") " pod="openshift-marketplace/community-operators-m55mf" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.045746 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wgs7m"] Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.077553 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m55mf" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.079320 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-catalog-content\") pod \"certified-operators-wgs7m\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") " pod="openshift-marketplace/certified-operators-wgs7m" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.079354 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r9n8\" (UniqueName: \"kubernetes.io/projected/c3bb6398-2e70-4214-ba44-302a7f5b590d-kube-api-access-2r9n8\") pod \"certified-operators-wgs7m\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") " pod="openshift-marketplace/certified-operators-wgs7m" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.079432 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-utilities\") pod \"certified-operators-wgs7m\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") " pod="openshift-marketplace/certified-operators-wgs7m" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.186740 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-catalog-content\") pod \"certified-operators-wgs7m\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") " pod="openshift-marketplace/certified-operators-wgs7m" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.187023 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r9n8\" (UniqueName: \"kubernetes.io/projected/c3bb6398-2e70-4214-ba44-302a7f5b590d-kube-api-access-2r9n8\") pod \"certified-operators-wgs7m\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") " pod="openshift-marketplace/certified-operators-wgs7m" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.187096 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-utilities\") pod \"certified-operators-wgs7m\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") " pod="openshift-marketplace/certified-operators-wgs7m" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.187469 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-catalog-content\") pod \"certified-operators-wgs7m\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") " pod="openshift-marketplace/certified-operators-wgs7m" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.187548 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-utilities\") pod \"certified-operators-wgs7m\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") " pod="openshift-marketplace/certified-operators-wgs7m" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.203174 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r9n8\" (UniqueName: \"kubernetes.io/projected/c3bb6398-2e70-4214-ba44-302a7f5b590d-kube-api-access-2r9n8\") pod \"certified-operators-wgs7m\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") " pod="openshift-marketplace/certified-operators-wgs7m" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.386587 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgs7m" Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.555175 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m55mf"] Dec 02 15:11:35 crc kubenswrapper[4900]: I1202 15:11:35.840032 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wgs7m"] Dec 02 15:11:35 crc kubenswrapper[4900]: W1202 15:11:35.850662 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3bb6398_2e70_4214_ba44_302a7f5b590d.slice/crio-106fe666370edb74b30a446d39658dd3126a791229b9ff9f5222be1013cb837a WatchSource:0}: Error finding container 106fe666370edb74b30a446d39658dd3126a791229b9ff9f5222be1013cb837a: Status 404 returned error can't find the container with id 106fe666370edb74b30a446d39658dd3126a791229b9ff9f5222be1013cb837a Dec 02 15:11:36 crc kubenswrapper[4900]: I1202 15:11:36.372001 4900 generic.go:334] "Generic (PLEG): container finished" podID="c3bb6398-2e70-4214-ba44-302a7f5b590d" containerID="ac5e8280a6f5f1c9917eae447ac48b8466093b3ba12e0476ab004d608f2d66c8" exitCode=0 Dec 02 15:11:36 crc kubenswrapper[4900]: I1202 15:11:36.372104 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgs7m" event={"ID":"c3bb6398-2e70-4214-ba44-302a7f5b590d","Type":"ContainerDied","Data":"ac5e8280a6f5f1c9917eae447ac48b8466093b3ba12e0476ab004d608f2d66c8"} Dec 02 15:11:36 crc kubenswrapper[4900]: I1202 15:11:36.372307 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgs7m" event={"ID":"c3bb6398-2e70-4214-ba44-302a7f5b590d","Type":"ContainerStarted","Data":"106fe666370edb74b30a446d39658dd3126a791229b9ff9f5222be1013cb837a"} Dec 02 15:11:36 crc kubenswrapper[4900]: I1202 15:11:36.376073 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:11:36 crc kubenswrapper[4900]: I1202 15:11:36.377021 4900 generic.go:334] "Generic (PLEG): container finished" podID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" containerID="7c973b0ce29fb79f3a890252dc9950737ffe7079ed0b155407d25ac38570f223" exitCode=0 Dec 02 15:11:36 crc kubenswrapper[4900]: I1202 
15:11:36.377050 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m55mf" event={"ID":"e63087c8-8cd4-43d9-801e-ceedc4c72e91","Type":"ContainerDied","Data":"7c973b0ce29fb79f3a890252dc9950737ffe7079ed0b155407d25ac38570f223"} Dec 02 15:11:36 crc kubenswrapper[4900]: I1202 15:11:36.377068 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m55mf" event={"ID":"e63087c8-8cd4-43d9-801e-ceedc4c72e91","Type":"ContainerStarted","Data":"2f7d1b18aa61d448392598971fa10c29969754d8a136e571cda72ed3572efdf4"} Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.387393 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m55mf" event={"ID":"e63087c8-8cd4-43d9-801e-ceedc4c72e91","Type":"ContainerStarted","Data":"1436d82113517789e2393af3027805bbf70208b948198545bdc6c3d059ce2810"} Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.546726 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6qz25"] Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.549010 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qz25" Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.559864 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qz25"] Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.626587 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-catalog-content\") pod \"redhat-marketplace-6qz25\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") " pod="openshift-marketplace/redhat-marketplace-6qz25" Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.626916 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rcwg\" (UniqueName: \"kubernetes.io/projected/1ba9315f-9053-428b-8411-7dfb64a581af-kube-api-access-8rcwg\") pod \"redhat-marketplace-6qz25\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") " pod="openshift-marketplace/redhat-marketplace-6qz25" Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.626948 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-utilities\") pod \"redhat-marketplace-6qz25\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") " pod="openshift-marketplace/redhat-marketplace-6qz25" Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.728194 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-catalog-content\") pod \"redhat-marketplace-6qz25\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") " pod="openshift-marketplace/redhat-marketplace-6qz25" Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.728311 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rcwg\" (UniqueName: \"kubernetes.io/projected/1ba9315f-9053-428b-8411-7dfb64a581af-kube-api-access-8rcwg\") pod \"redhat-marketplace-6qz25\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") " pod="openshift-marketplace/redhat-marketplace-6qz25" Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 
15:11:37.728352 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-utilities\") pod \"redhat-marketplace-6qz25\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") " pod="openshift-marketplace/redhat-marketplace-6qz25"
Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.729204 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-utilities\") pod \"redhat-marketplace-6qz25\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") " pod="openshift-marketplace/redhat-marketplace-6qz25"
Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.729237 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-catalog-content\") pod \"redhat-marketplace-6qz25\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") " pod="openshift-marketplace/redhat-marketplace-6qz25"
Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.756123 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rcwg\" (UniqueName: \"kubernetes.io/projected/1ba9315f-9053-428b-8411-7dfb64a581af-kube-api-access-8rcwg\") pod \"redhat-marketplace-6qz25\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") " pod="openshift-marketplace/redhat-marketplace-6qz25"
Dec 02 15:11:37 crc kubenswrapper[4900]: I1202 15:11:37.908265 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qz25"
Dec 02 15:11:38 crc kubenswrapper[4900]: I1202 15:11:38.212059 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qz25"]
Dec 02 15:11:38 crc kubenswrapper[4900]: W1202 15:11:38.212921 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ba9315f_9053_428b_8411_7dfb64a581af.slice/crio-096054a40b0da7421627edc9b6fce35a65d1acf5018a738fdd4b8eafc8547dd9 WatchSource:0}: Error finding container 096054a40b0da7421627edc9b6fce35a65d1acf5018a738fdd4b8eafc8547dd9: Status 404 returned error can't find the container with id 096054a40b0da7421627edc9b6fce35a65d1acf5018a738fdd4b8eafc8547dd9
Dec 02 15:11:38 crc kubenswrapper[4900]: I1202 15:11:38.407317 4900 generic.go:334] "Generic (PLEG): container finished" podID="1ba9315f-9053-428b-8411-7dfb64a581af" containerID="aaeddc575b238a7c570fb84af30c5ec364702715610065cd74ddd3afe99e544d" exitCode=0
Dec 02 15:11:38 crc kubenswrapper[4900]: I1202 15:11:38.407390 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qz25" event={"ID":"1ba9315f-9053-428b-8411-7dfb64a581af","Type":"ContainerDied","Data":"aaeddc575b238a7c570fb84af30c5ec364702715610065cd74ddd3afe99e544d"}
Dec 02 15:11:38 crc kubenswrapper[4900]: I1202 15:11:38.407417 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qz25" event={"ID":"1ba9315f-9053-428b-8411-7dfb64a581af","Type":"ContainerStarted","Data":"096054a40b0da7421627edc9b6fce35a65d1acf5018a738fdd4b8eafc8547dd9"}
Dec 02 15:11:38 crc kubenswrapper[4900]: I1202 15:11:38.409834 4900 generic.go:334] "Generic (PLEG): container finished" podID="c3bb6398-2e70-4214-ba44-302a7f5b590d" containerID="61087d2a5a4770a07f09e6b38f5ac487abf8e14b3d6f922a88e83b176bf844b9" exitCode=0
Dec 02 15:11:38 crc kubenswrapper[4900]: I1202 15:11:38.409934 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgs7m" event={"ID":"c3bb6398-2e70-4214-ba44-302a7f5b590d","Type":"ContainerDied","Data":"61087d2a5a4770a07f09e6b38f5ac487abf8e14b3d6f922a88e83b176bf844b9"}
Dec 02 15:11:38 crc kubenswrapper[4900]: I1202 15:11:38.412679 4900 generic.go:334] "Generic (PLEG): container finished" podID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" containerID="1436d82113517789e2393af3027805bbf70208b948198545bdc6c3d059ce2810" exitCode=0
Dec 02 15:11:38 crc kubenswrapper[4900]: I1202 15:11:38.412721 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m55mf" event={"ID":"e63087c8-8cd4-43d9-801e-ceedc4c72e91","Type":"ContainerDied","Data":"1436d82113517789e2393af3027805bbf70208b948198545bdc6c3d059ce2810"}
Dec 02 15:11:39 crc kubenswrapper[4900]: I1202 15:11:39.431189 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m55mf" event={"ID":"e63087c8-8cd4-43d9-801e-ceedc4c72e91","Type":"ContainerStarted","Data":"6841c2eb29f71a0168182bbb5c7287ba15e59f71f304ab056b9b289d914e3e52"}
Dec 02 15:11:39 crc kubenswrapper[4900]: I1202 15:11:39.457365 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m55mf" podStartSLOduration=2.707612384 podStartE2EDuration="5.457342151s" podCreationTimestamp="2025-12-02 15:11:34 +0000 UTC" firstStartedPulling="2025-12-02 15:11:36.378379624 +0000 UTC m=+5341.794193475" lastFinishedPulling="2025-12-02 15:11:39.128109391 +0000 UTC m=+5344.543923242" observedRunningTime="2025-12-02 15:11:39.453774641 +0000 UTC m=+5344.869588502" watchObservedRunningTime="2025-12-02 15:11:39.457342151 +0000 UTC m=+5344.873156012"
Dec 02 15:11:40 crc kubenswrapper[4900]: I1202 15:11:40.442636 4900 generic.go:334] "Generic (PLEG): container finished" podID="1ba9315f-9053-428b-8411-7dfb64a581af" containerID="c4bf6b54887f8c31d5afd7ddb21aa528400c05c08278bcf73a83cce8f4475dc0" exitCode=0
Dec 02 15:11:40 crc kubenswrapper[4900]: I1202 15:11:40.442737 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qz25" event={"ID":"1ba9315f-9053-428b-8411-7dfb64a581af","Type":"ContainerDied","Data":"c4bf6b54887f8c31d5afd7ddb21aa528400c05c08278bcf73a83cce8f4475dc0"}
Dec 02 15:11:40 crc kubenswrapper[4900]: I1202 15:11:40.445544 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgs7m" event={"ID":"c3bb6398-2e70-4214-ba44-302a7f5b590d","Type":"ContainerStarted","Data":"d7e62ce1443c8f712c4efa77893937b29731f1c1739b88b6a0ee036bd50e10fe"}
Dec 02 15:11:40 crc kubenswrapper[4900]: I1202 15:11:40.484044 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wgs7m" podStartSLOduration=3.625539645 podStartE2EDuration="6.484023514s" podCreationTimestamp="2025-12-02 15:11:34 +0000 UTC" firstStartedPulling="2025-12-02 15:11:36.375840673 +0000 UTC m=+5341.791654524" lastFinishedPulling="2025-12-02 15:11:39.234324542 +0000 UTC m=+5344.650138393" observedRunningTime="2025-12-02 15:11:40.483073627 +0000 UTC m=+5345.898887518" watchObservedRunningTime="2025-12-02 15:11:40.484023514 +0000 UTC m=+5345.899837365"
Dec 02 15:11:41 crc kubenswrapper[4900]: I1202 15:11:41.456523 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qz25" event={"ID":"1ba9315f-9053-428b-8411-7dfb64a581af","Type":"ContainerStarted","Data":"8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580"}
Dec 02 15:11:41 crc kubenswrapper[4900]: I1202 15:11:41.487972 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6qz25" podStartSLOduration=1.9672422969999999 podStartE2EDuration="4.487948927s" podCreationTimestamp="2025-12-02 15:11:37 +0000 UTC" firstStartedPulling="2025-12-02 15:11:38.414050062 +0000 UTC m=+5343.829863923" lastFinishedPulling="2025-12-02 15:11:40.934756702 +0000 UTC m=+5346.350570553" observedRunningTime="2025-12-02 15:11:41.486503357 +0000 UTC m=+5346.902317228" watchObservedRunningTime="2025-12-02 15:11:41.487948927 +0000 UTC m=+5346.903762788"
Dec 02 15:11:45 crc kubenswrapper[4900]: I1202 15:11:45.078172 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m55mf"
Dec 02 15:11:45 crc kubenswrapper[4900]: I1202 15:11:45.078891 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m55mf"
Dec 02 15:11:45 crc kubenswrapper[4900]: I1202 15:11:45.140006 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m55mf"
Dec 02 15:11:45 crc kubenswrapper[4900]: I1202 15:11:45.387315 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wgs7m"
Dec 02 15:11:45 crc kubenswrapper[4900]: I1202 15:11:45.387394 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wgs7m"
Dec 02 15:11:45 crc kubenswrapper[4900]: I1202 15:11:45.463174 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wgs7m"
Dec 02 15:11:45 crc kubenswrapper[4900]: I1202 15:11:45.574990 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wgs7m"
Dec 02 15:11:45 crc kubenswrapper[4900]: I1202 15:11:45.578204 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m55mf"
Dec 02 15:11:47 crc kubenswrapper[4900]: I1202 15:11:47.327962 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wgs7m"]
Dec 02 15:11:47 crc kubenswrapper[4900]: I1202 15:11:47.512597 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wgs7m" podUID="c3bb6398-2e70-4214-ba44-302a7f5b590d" containerName="registry-server" containerID="cri-o://d7e62ce1443c8f712c4efa77893937b29731f1c1739b88b6a0ee036bd50e10fe" gracePeriod=2
Dec 02 15:11:47 crc kubenswrapper[4900]: I1202 15:11:47.908774 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6qz25"
Dec 02 15:11:47 crc kubenswrapper[4900]: I1202 15:11:47.909040 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6qz25"
Dec 02 15:11:47 crc kubenswrapper[4900]: I1202 15:11:47.964511 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6qz25"
Dec 02 15:11:48 crc kubenswrapper[4900]: I1202 15:11:48.527859 4900 generic.go:334] "Generic (PLEG): container finished" podID="c3bb6398-2e70-4214-ba44-302a7f5b590d" containerID="d7e62ce1443c8f712c4efa77893937b29731f1c1739b88b6a0ee036bd50e10fe" exitCode=0
Dec 02 15:11:48 crc kubenswrapper[4900]: I1202 15:11:48.527981 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgs7m" event={"ID":"c3bb6398-2e70-4214-ba44-302a7f5b590d","Type":"ContainerDied","Data":"d7e62ce1443c8f712c4efa77893937b29731f1c1739b88b6a0ee036bd50e10fe"}
Dec 02 15:11:48 crc kubenswrapper[4900]: I1202 15:11:48.612237 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6qz25"
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.124888 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgs7m"
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.226799 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-utilities\") pod \"c3bb6398-2e70-4214-ba44-302a7f5b590d\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") "
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.226880 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r9n8\" (UniqueName: \"kubernetes.io/projected/c3bb6398-2e70-4214-ba44-302a7f5b590d-kube-api-access-2r9n8\") pod \"c3bb6398-2e70-4214-ba44-302a7f5b590d\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") "
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.227088 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-catalog-content\") pod \"c3bb6398-2e70-4214-ba44-302a7f5b590d\" (UID: \"c3bb6398-2e70-4214-ba44-302a7f5b590d\") "
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.228120 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-utilities" (OuterVolumeSpecName: "utilities") pod "c3bb6398-2e70-4214-ba44-302a7f5b590d" (UID: "c3bb6398-2e70-4214-ba44-302a7f5b590d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.244019 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3bb6398-2e70-4214-ba44-302a7f5b590d-kube-api-access-2r9n8" (OuterVolumeSpecName: "kube-api-access-2r9n8") pod "c3bb6398-2e70-4214-ba44-302a7f5b590d" (UID: "c3bb6398-2e70-4214-ba44-302a7f5b590d"). InnerVolumeSpecName "kube-api-access-2r9n8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.303367 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3bb6398-2e70-4214-ba44-302a7f5b590d" (UID: "c3bb6398-2e70-4214-ba44-302a7f5b590d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.329146 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.329181 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r9n8\" (UniqueName: \"kubernetes.io/projected/c3bb6398-2e70-4214-ba44-302a7f5b590d-kube-api-access-2r9n8\") on node \"crc\" DevicePath \"\""
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.329236 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3bb6398-2e70-4214-ba44-302a7f5b590d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.540006 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgs7m"
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.540006 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgs7m" event={"ID":"c3bb6398-2e70-4214-ba44-302a7f5b590d","Type":"ContainerDied","Data":"106fe666370edb74b30a446d39658dd3126a791229b9ff9f5222be1013cb837a"}
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.540089 4900 scope.go:117] "RemoveContainer" containerID="d7e62ce1443c8f712c4efa77893937b29731f1c1739b88b6a0ee036bd50e10fe"
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.573263 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wgs7m"]
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.578856 4900 scope.go:117] "RemoveContainer" containerID="61087d2a5a4770a07f09e6b38f5ac487abf8e14b3d6f922a88e83b176bf844b9"
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.580964 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wgs7m"]
Dec 02 15:11:49 crc kubenswrapper[4900]: I1202 15:11:49.596530 4900 scope.go:117] "RemoveContainer" containerID="ac5e8280a6f5f1c9917eae447ac48b8466093b3ba12e0476ab004d608f2d66c8"
Dec 02 15:11:50 crc kubenswrapper[4900]: I1202 15:11:50.924987 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3bb6398-2e70-4214-ba44-302a7f5b590d" path="/var/lib/kubelet/pods/c3bb6398-2e70-4214-ba44-302a7f5b590d/volumes"
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.128929 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m55mf"]
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.129216 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m55mf" podUID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" containerName="registry-server" containerID="cri-o://6841c2eb29f71a0168182bbb5c7287ba15e59f71f304ab056b9b289d914e3e52" gracePeriod=2
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.576966 4900 generic.go:334] "Generic (PLEG): container finished" podID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" containerID="6841c2eb29f71a0168182bbb5c7287ba15e59f71f304ab056b9b289d914e3e52" exitCode=0
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.577061 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m55mf" event={"ID":"e63087c8-8cd4-43d9-801e-ceedc4c72e91","Type":"ContainerDied","Data":"6841c2eb29f71a0168182bbb5c7287ba15e59f71f304ab056b9b289d914e3e52"}
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.577281 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m55mf" event={"ID":"e63087c8-8cd4-43d9-801e-ceedc4c72e91","Type":"ContainerDied","Data":"2f7d1b18aa61d448392598971fa10c29969754d8a136e571cda72ed3572efdf4"}
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.577301 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f7d1b18aa61d448392598971fa10c29969754d8a136e571cda72ed3572efdf4"
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.591046 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m55mf"
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.697052 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbbcc\" (UniqueName: \"kubernetes.io/projected/e63087c8-8cd4-43d9-801e-ceedc4c72e91-kube-api-access-kbbcc\") pod \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") "
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.697184 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-utilities\") pod \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") "
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.697228 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-catalog-content\") pod \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\" (UID: \"e63087c8-8cd4-43d9-801e-ceedc4c72e91\") "
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.698487 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-utilities" (OuterVolumeSpecName: "utilities") pod "e63087c8-8cd4-43d9-801e-ceedc4c72e91" (UID: "e63087c8-8cd4-43d9-801e-ceedc4c72e91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.704923 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63087c8-8cd4-43d9-801e-ceedc4c72e91-kube-api-access-kbbcc" (OuterVolumeSpecName: "kube-api-access-kbbcc") pod "e63087c8-8cd4-43d9-801e-ceedc4c72e91" (UID: "e63087c8-8cd4-43d9-801e-ceedc4c72e91"). InnerVolumeSpecName "kube-api-access-kbbcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.761785 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e63087c8-8cd4-43d9-801e-ceedc4c72e91" (UID: "e63087c8-8cd4-43d9-801e-ceedc4c72e91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.799222 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.799255 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63087c8-8cd4-43d9-801e-ceedc4c72e91-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 15:11:52 crc kubenswrapper[4900]: I1202 15:11:52.799266 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbbcc\" (UniqueName: \"kubernetes.io/projected/e63087c8-8cd4-43d9-801e-ceedc4c72e91-kube-api-access-kbbcc\") on node \"crc\" DevicePath \"\""
Dec 02 15:11:53 crc kubenswrapper[4900]: I1202 15:11:53.585684 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m55mf"
Dec 02 15:11:53 crc kubenswrapper[4900]: I1202 15:11:53.618565 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m55mf"]
Dec 02 15:11:53 crc kubenswrapper[4900]: I1202 15:11:53.627906 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m55mf"]
Dec 02 15:11:54 crc kubenswrapper[4900]: I1202 15:11:54.931605 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" path="/var/lib/kubelet/pods/e63087c8-8cd4-43d9-801e-ceedc4c72e91/volumes"
Dec 02 15:11:55 crc kubenswrapper[4900]: I1202 15:11:55.527796 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qz25"]
Dec 02 15:11:55 crc kubenswrapper[4900]: I1202 15:11:55.528166 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6qz25" podUID="1ba9315f-9053-428b-8411-7dfb64a581af" containerName="registry-server" containerID="cri-o://8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580" gracePeriod=2
Dec 02 15:11:55 crc kubenswrapper[4900]: I1202 15:11:55.949077 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qz25"
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.063068 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-catalog-content\") pod \"1ba9315f-9053-428b-8411-7dfb64a581af\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") "
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.063147 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-utilities\") pod \"1ba9315f-9053-428b-8411-7dfb64a581af\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") "
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.063308 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rcwg\" (UniqueName: \"kubernetes.io/projected/1ba9315f-9053-428b-8411-7dfb64a581af-kube-api-access-8rcwg\") pod \"1ba9315f-9053-428b-8411-7dfb64a581af\" (UID: \"1ba9315f-9053-428b-8411-7dfb64a581af\") "
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.064421 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-utilities" (OuterVolumeSpecName: "utilities") pod "1ba9315f-9053-428b-8411-7dfb64a581af" (UID: "1ba9315f-9053-428b-8411-7dfb64a581af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.065037 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.069942 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba9315f-9053-428b-8411-7dfb64a581af-kube-api-access-8rcwg" (OuterVolumeSpecName: "kube-api-access-8rcwg") pod "1ba9315f-9053-428b-8411-7dfb64a581af" (UID: "1ba9315f-9053-428b-8411-7dfb64a581af"). InnerVolumeSpecName "kube-api-access-8rcwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.082753 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ba9315f-9053-428b-8411-7dfb64a581af" (UID: "1ba9315f-9053-428b-8411-7dfb64a581af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.166840 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba9315f-9053-428b-8411-7dfb64a581af-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.166873 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rcwg\" (UniqueName: \"kubernetes.io/projected/1ba9315f-9053-428b-8411-7dfb64a581af-kube-api-access-8rcwg\") on node \"crc\" DevicePath \"\""
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.630602 4900 generic.go:334] "Generic (PLEG): container finished" podID="1ba9315f-9053-428b-8411-7dfb64a581af" containerID="8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580" exitCode=0
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.630718 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qz25" event={"ID":"1ba9315f-9053-428b-8411-7dfb64a581af","Type":"ContainerDied","Data":"8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580"}
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.630760 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qz25"
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.630784 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qz25" event={"ID":"1ba9315f-9053-428b-8411-7dfb64a581af","Type":"ContainerDied","Data":"096054a40b0da7421627edc9b6fce35a65d1acf5018a738fdd4b8eafc8547dd9"}
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.630810 4900 scope.go:117] "RemoveContainer" containerID="8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580"
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.660695 4900 scope.go:117] "RemoveContainer" containerID="c4bf6b54887f8c31d5afd7ddb21aa528400c05c08278bcf73a83cce8f4475dc0"
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.691361 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qz25"]
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.699892 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qz25"]
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.700151 4900 scope.go:117] "RemoveContainer" containerID="aaeddc575b238a7c570fb84af30c5ec364702715610065cd74ddd3afe99e544d"
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.731730 4900 scope.go:117] "RemoveContainer" containerID="8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580"
Dec 02 15:11:56 crc kubenswrapper[4900]: E1202 15:11:56.732410 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580\": container with ID starting with 8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580 not found: ID does not exist" containerID="8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580"
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.732456 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580"} err="failed to get container status \"8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580\": rpc error: code = NotFound desc = could not find container \"8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580\": container with ID starting with 8a0536574d336134550428a054d6703a72a847ae96abd35b12efc043566cf580 not found: ID does not exist"
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.732477 4900 scope.go:117] "RemoveContainer" containerID="c4bf6b54887f8c31d5afd7ddb21aa528400c05c08278bcf73a83cce8f4475dc0"
Dec 02 15:11:56 crc kubenswrapper[4900]: E1202 15:11:56.732932 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4bf6b54887f8c31d5afd7ddb21aa528400c05c08278bcf73a83cce8f4475dc0\": container with ID starting with c4bf6b54887f8c31d5afd7ddb21aa528400c05c08278bcf73a83cce8f4475dc0 not found: ID does not exist" containerID="c4bf6b54887f8c31d5afd7ddb21aa528400c05c08278bcf73a83cce8f4475dc0"
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.732971 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4bf6b54887f8c31d5afd7ddb21aa528400c05c08278bcf73a83cce8f4475dc0"} err="failed to get container status \"c4bf6b54887f8c31d5afd7ddb21aa528400c05c08278bcf73a83cce8f4475dc0\": rpc error: code = NotFound desc = could not find container \"c4bf6b54887f8c31d5afd7ddb21aa528400c05c08278bcf73a83cce8f4475dc0\": container with ID starting with c4bf6b54887f8c31d5afd7ddb21aa528400c05c08278bcf73a83cce8f4475dc0 not found: ID does not exist"
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.732986 4900 scope.go:117] "RemoveContainer" containerID="aaeddc575b238a7c570fb84af30c5ec364702715610065cd74ddd3afe99e544d"
Dec 02 15:11:56 crc kubenswrapper[4900]: E1202 15:11:56.733415 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaeddc575b238a7c570fb84af30c5ec364702715610065cd74ddd3afe99e544d\": container with ID starting with aaeddc575b238a7c570fb84af30c5ec364702715610065cd74ddd3afe99e544d not found: ID does not exist" containerID="aaeddc575b238a7c570fb84af30c5ec364702715610065cd74ddd3afe99e544d"
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.733463 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaeddc575b238a7c570fb84af30c5ec364702715610065cd74ddd3afe99e544d"} err="failed to get container status \"aaeddc575b238a7c570fb84af30c5ec364702715610065cd74ddd3afe99e544d\": rpc error: code = NotFound desc = could not find container \"aaeddc575b238a7c570fb84af30c5ec364702715610065cd74ddd3afe99e544d\": container with ID starting with aaeddc575b238a7c570fb84af30c5ec364702715610065cd74ddd3afe99e544d not found: ID does not exist"
Dec 02 15:11:56 crc kubenswrapper[4900]: I1202 15:11:56.921357 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ba9315f-9053-428b-8411-7dfb64a581af" path="/var/lib/kubelet/pods/1ba9315f-9053-428b-8411-7dfb64a581af/volumes"
Dec 02 15:12:25 crc kubenswrapper[4900]: E1202 15:12:25.123609 4900 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.130:37954->38.102.83.130:46203: write tcp 38.102.83.130:37954->38.102.83.130:46203: write: broken pipe
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.198685 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xdzd5"]
Dec 02 15:12:43 crc kubenswrapper[4900]: E1202 15:12:43.199423 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" containerName="registry-server"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199435 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" containerName="registry-server"
Dec 02 15:12:43 crc kubenswrapper[4900]: E1202 15:12:43.199453 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba9315f-9053-428b-8411-7dfb64a581af" containerName="registry-server"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199460 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba9315f-9053-428b-8411-7dfb64a581af" containerName="registry-server"
Dec 02 15:12:43 crc kubenswrapper[4900]: E1202 15:12:43.199470 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba9315f-9053-428b-8411-7dfb64a581af" containerName="extract-utilities"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199478 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba9315f-9053-428b-8411-7dfb64a581af" containerName="extract-utilities"
Dec 02 15:12:43 crc kubenswrapper[4900]: E1202 15:12:43.199488 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" containerName="extract-utilities"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199494 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" containerName="extract-utilities"
Dec 02 15:12:43 crc kubenswrapper[4900]: E1202 15:12:43.199502 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3bb6398-2e70-4214-ba44-302a7f5b590d" containerName="registry-server"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199508 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bb6398-2e70-4214-ba44-302a7f5b590d" containerName="registry-server"
Dec 02 15:12:43 crc kubenswrapper[4900]: E1202 15:12:43.199519 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3bb6398-2e70-4214-ba44-302a7f5b590d" containerName="extract-content"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199525 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bb6398-2e70-4214-ba44-302a7f5b590d" containerName="extract-content"
Dec 02 15:12:43 crc kubenswrapper[4900]: E1202 15:12:43.199535 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" containerName="extract-content"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199541 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" containerName="extract-content"
Dec 02 15:12:43 crc kubenswrapper[4900]: E1202 15:12:43.199554 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba9315f-9053-428b-8411-7dfb64a581af" containerName="extract-content"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199560 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba9315f-9053-428b-8411-7dfb64a581af" containerName="extract-content"
Dec 02 15:12:43 crc kubenswrapper[4900]: E1202 15:12:43.199572 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3bb6398-2e70-4214-ba44-302a7f5b590d" containerName="extract-utilities"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199577 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bb6398-2e70-4214-ba44-302a7f5b590d" containerName="extract-utilities"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199761 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3bb6398-2e70-4214-ba44-302a7f5b590d" containerName="registry-server"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199785 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba9315f-9053-428b-8411-7dfb64a581af" containerName="registry-server"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.199799 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63087c8-8cd4-43d9-801e-ceedc4c72e91" containerName="registry-server"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.200548 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xdzd5"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.218184 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xdzd5"]
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.288377 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9402-account-create-update-rvfbn"]
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.289061 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmr2p\" (UniqueName: \"kubernetes.io/projected/b417b22a-6ddb-4537-a954-80ed6f20ab40-kube-api-access-qmr2p\") pod \"barbican-db-create-xdzd5\" (UID: \"b417b22a-6ddb-4537-a954-80ed6f20ab40\") " pod="openstack/barbican-db-create-xdzd5"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.289147 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b417b22a-6ddb-4537-a954-80ed6f20ab40-operator-scripts\") pod \"barbican-db-create-xdzd5\" (UID: \"b417b22a-6ddb-4537-a954-80ed6f20ab40\") " pod="openstack/barbican-db-create-xdzd5"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.289771 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9402-account-create-update-rvfbn"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.295845 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.301537 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9402-account-create-update-rvfbn"]
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.390343 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b417b22a-6ddb-4537-a954-80ed6f20ab40-operator-scripts\") pod \"barbican-db-create-xdzd5\" (UID: \"b417b22a-6ddb-4537-a954-80ed6f20ab40\") " pod="openstack/barbican-db-create-xdzd5"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.390439 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbrxq\" (UniqueName: \"kubernetes.io/projected/c4759518-d3b9-4007-be35-dc9b410f3a84-kube-api-access-tbrxq\") pod \"barbican-9402-account-create-update-rvfbn\" (UID: \"c4759518-d3b9-4007-be35-dc9b410f3a84\") " pod="openstack/barbican-9402-account-create-update-rvfbn"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.390494 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4759518-d3b9-4007-be35-dc9b410f3a84-operator-scripts\") pod \"barbican-9402-account-create-update-rvfbn\" (UID: \"c4759518-d3b9-4007-be35-dc9b410f3a84\") " pod="openstack/barbican-9402-account-create-update-rvfbn"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.390594 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmr2p\" (UniqueName: \"kubernetes.io/projected/b417b22a-6ddb-4537-a954-80ed6f20ab40-kube-api-access-qmr2p\") pod \"barbican-db-create-xdzd5\" (UID: \"b417b22a-6ddb-4537-a954-80ed6f20ab40\") " pod="openstack/barbican-db-create-xdzd5"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.391462 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b417b22a-6ddb-4537-a954-80ed6f20ab40-operator-scripts\") pod \"barbican-db-create-xdzd5\" (UID: \"b417b22a-6ddb-4537-a954-80ed6f20ab40\") " pod="openstack/barbican-db-create-xdzd5"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.415087 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmr2p\" (UniqueName: \"kubernetes.io/projected/b417b22a-6ddb-4537-a954-80ed6f20ab40-kube-api-access-qmr2p\") pod \"barbican-db-create-xdzd5\" (UID: \"b417b22a-6ddb-4537-a954-80ed6f20ab40\") " pod="openstack/barbican-db-create-xdzd5"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.492010 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbrxq\" (UniqueName: \"kubernetes.io/projected/c4759518-d3b9-4007-be35-dc9b410f3a84-kube-api-access-tbrxq\") pod \"barbican-9402-account-create-update-rvfbn\" (UID: \"c4759518-d3b9-4007-be35-dc9b410f3a84\") " pod="openstack/barbican-9402-account-create-update-rvfbn"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.492117 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4759518-d3b9-4007-be35-dc9b410f3a84-operator-scripts\") pod \"barbican-9402-account-create-update-rvfbn\" (UID: \"c4759518-d3b9-4007-be35-dc9b410f3a84\") " pod="openstack/barbican-9402-account-create-update-rvfbn"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.492886 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4759518-d3b9-4007-be35-dc9b410f3a84-operator-scripts\") pod \"barbican-9402-account-create-update-rvfbn\" (UID: \"c4759518-d3b9-4007-be35-dc9b410f3a84\") " pod="openstack/barbican-9402-account-create-update-rvfbn"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.507627 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbrxq\" (UniqueName: \"kubernetes.io/projected/c4759518-d3b9-4007-be35-dc9b410f3a84-kube-api-access-tbrxq\") pod \"barbican-9402-account-create-update-rvfbn\" (UID: \"c4759518-d3b9-4007-be35-dc9b410f3a84\") " pod="openstack/barbican-9402-account-create-update-rvfbn"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.539773 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xdzd5"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.609331 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9402-account-create-update-rvfbn"
Dec 02 15:12:43 crc kubenswrapper[4900]: I1202 15:12:43.992484 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xdzd5"]
Dec 02 15:12:44 crc kubenswrapper[4900]: W1202 15:12:44.073978 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4759518_d3b9_4007_be35_dc9b410f3a84.slice/crio-41403a99098824f0aceccb9d30d3a0b652680b6c2a01b932a2a3d6a6703ec8b5 WatchSource:0}: Error finding container 41403a99098824f0aceccb9d30d3a0b652680b6c2a01b932a2a3d6a6703ec8b5: Status 404 returned error can't find the container with id 41403a99098824f0aceccb9d30d3a0b652680b6c2a01b932a2a3d6a6703ec8b5
Dec 02 15:12:44 crc kubenswrapper[4900]: I1202 15:12:44.076188 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9402-account-create-update-rvfbn"]
Dec 02 15:12:44 crc kubenswrapper[4900]: I1202 15:12:44.125028 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9402-account-create-update-rvfbn" event={"ID":"c4759518-d3b9-4007-be35-dc9b410f3a84","Type":"ContainerStarted","Data":"41403a99098824f0aceccb9d30d3a0b652680b6c2a01b932a2a3d6a6703ec8b5"}
Dec 02 15:12:44 crc kubenswrapper[4900]: I1202 15:12:44.126665 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xdzd5" event={"ID":"b417b22a-6ddb-4537-a954-80ed6f20ab40","Type":"ContainerStarted","Data":"a3e284a6d73d0559cabb1ce42d879c843b4109f5c8be44836c81ec7991189051"}
Dec 02 15:12:45 crc kubenswrapper[4900]: I1202 15:12:45.135928 4900 generic.go:334] "Generic (PLEG): container finished" podID="c4759518-d3b9-4007-be35-dc9b410f3a84" containerID="fb58572bacff42ba442cba8b54a32d25469aa19716a4be3c10233cf09f3bd888" exitCode=0
Dec 02 15:12:45 crc kubenswrapper[4900]: I1202 15:12:45.136012 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9402-account-create-update-rvfbn" event={"ID":"c4759518-d3b9-4007-be35-dc9b410f3a84","Type":"ContainerDied","Data":"fb58572bacff42ba442cba8b54a32d25469aa19716a4be3c10233cf09f3bd888"}
Dec 02 15:12:45 crc kubenswrapper[4900]: I1202 15:12:45.139781 4900 generic.go:334] "Generic (PLEG): container finished" podID="b417b22a-6ddb-4537-a954-80ed6f20ab40" containerID="c13ad78cec7772de4e1d11a116992e6d760febb7a46b19423b2a47db2f15dfc8" exitCode=0
Dec 02 15:12:45 crc kubenswrapper[4900]: I1202 15:12:45.139824 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xdzd5" event={"ID":"b417b22a-6ddb-4537-a954-80ed6f20ab40","Type":"ContainerDied","Data":"c13ad78cec7772de4e1d11a116992e6d760febb7a46b19423b2a47db2f15dfc8"}
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.588519 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xdzd5"
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.593448 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9402-account-create-update-rvfbn"
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.652189 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmr2p\" (UniqueName: \"kubernetes.io/projected/b417b22a-6ddb-4537-a954-80ed6f20ab40-kube-api-access-qmr2p\") pod \"b417b22a-6ddb-4537-a954-80ed6f20ab40\" (UID: \"b417b22a-6ddb-4537-a954-80ed6f20ab40\") "
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.652238 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b417b22a-6ddb-4537-a954-80ed6f20ab40-operator-scripts\") pod \"b417b22a-6ddb-4537-a954-80ed6f20ab40\" (UID: \"b417b22a-6ddb-4537-a954-80ed6f20ab40\") "
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.659738 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b417b22a-6ddb-4537-a954-80ed6f20ab40-kube-api-access-qmr2p" (OuterVolumeSpecName: "kube-api-access-qmr2p") pod "b417b22a-6ddb-4537-a954-80ed6f20ab40" (UID: "b417b22a-6ddb-4537-a954-80ed6f20ab40"). InnerVolumeSpecName "kube-api-access-qmr2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.660855 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b417b22a-6ddb-4537-a954-80ed6f20ab40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b417b22a-6ddb-4537-a954-80ed6f20ab40" (UID: "b417b22a-6ddb-4537-a954-80ed6f20ab40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.753773 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4759518-d3b9-4007-be35-dc9b410f3a84-operator-scripts\") pod \"c4759518-d3b9-4007-be35-dc9b410f3a84\" (UID: \"c4759518-d3b9-4007-be35-dc9b410f3a84\") "
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.753849 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbrxq\" (UniqueName: \"kubernetes.io/projected/c4759518-d3b9-4007-be35-dc9b410f3a84-kube-api-access-tbrxq\") pod \"c4759518-d3b9-4007-be35-dc9b410f3a84\" (UID: \"c4759518-d3b9-4007-be35-dc9b410f3a84\") "
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.754273 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmr2p\" (UniqueName: \"kubernetes.io/projected/b417b22a-6ddb-4537-a954-80ed6f20ab40-kube-api-access-qmr2p\") on node \"crc\" DevicePath \"\""
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.754293 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b417b22a-6ddb-4537-a954-80ed6f20ab40-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.754667 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4759518-d3b9-4007-be35-dc9b410f3a84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4759518-d3b9-4007-be35-dc9b410f3a84" (UID: "c4759518-d3b9-4007-be35-dc9b410f3a84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.756872 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4759518-d3b9-4007-be35-dc9b410f3a84-kube-api-access-tbrxq" (OuterVolumeSpecName: "kube-api-access-tbrxq") pod "c4759518-d3b9-4007-be35-dc9b410f3a84" (UID: "c4759518-d3b9-4007-be35-dc9b410f3a84"). InnerVolumeSpecName "kube-api-access-tbrxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.855998 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4759518-d3b9-4007-be35-dc9b410f3a84-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 15:12:46 crc kubenswrapper[4900]: I1202 15:12:46.856040 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbrxq\" (UniqueName: \"kubernetes.io/projected/c4759518-d3b9-4007-be35-dc9b410f3a84-kube-api-access-tbrxq\") on node \"crc\" DevicePath \"\""
Dec 02 15:12:47 crc kubenswrapper[4900]: I1202 15:12:47.162821 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9402-account-create-update-rvfbn" event={"ID":"c4759518-d3b9-4007-be35-dc9b410f3a84","Type":"ContainerDied","Data":"41403a99098824f0aceccb9d30d3a0b652680b6c2a01b932a2a3d6a6703ec8b5"}
Dec 02 15:12:47 crc kubenswrapper[4900]: I1202 15:12:47.162865 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9402-account-create-update-rvfbn"
Dec 02 15:12:47 crc kubenswrapper[4900]: I1202 15:12:47.162900 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41403a99098824f0aceccb9d30d3a0b652680b6c2a01b932a2a3d6a6703ec8b5"
Dec 02 15:12:47 crc kubenswrapper[4900]: I1202 15:12:47.164454 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xdzd5" event={"ID":"b417b22a-6ddb-4537-a954-80ed6f20ab40","Type":"ContainerDied","Data":"a3e284a6d73d0559cabb1ce42d879c843b4109f5c8be44836c81ec7991189051"}
Dec 02 15:12:47 crc kubenswrapper[4900]: I1202 15:12:47.164507 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e284a6d73d0559cabb1ce42d879c843b4109f5c8be44836c81ec7991189051"
Dec 02 15:12:47 crc kubenswrapper[4900]: I1202 15:12:47.164556 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xdzd5"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.622067 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2jbhb"]
Dec 02 15:12:48 crc kubenswrapper[4900]: E1202 15:12:48.622725 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4759518-d3b9-4007-be35-dc9b410f3a84" containerName="mariadb-account-create-update"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.622744 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4759518-d3b9-4007-be35-dc9b410f3a84" containerName="mariadb-account-create-update"
Dec 02 15:12:48 crc kubenswrapper[4900]: E1202 15:12:48.622765 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b417b22a-6ddb-4537-a954-80ed6f20ab40" containerName="mariadb-database-create"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.622773 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b417b22a-6ddb-4537-a954-80ed6f20ab40" containerName="mariadb-database-create"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.622971 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b417b22a-6ddb-4537-a954-80ed6f20ab40" containerName="mariadb-database-create"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.622997 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4759518-d3b9-4007-be35-dc9b410f3a84" containerName="mariadb-account-create-update"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.623509 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.625479 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-k42lh"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.625683 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.638550 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2jbhb"]
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.689684 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-db-sync-config-data\") pod \"barbican-db-sync-2jbhb\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") " pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.689928 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-combined-ca-bundle\") pod \"barbican-db-sync-2jbhb\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") " pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.690075 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xptv6\" (UniqueName: \"kubernetes.io/projected/60c641e8-f462-45b4-9626-297966f19298-kube-api-access-xptv6\") pod \"barbican-db-sync-2jbhb\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") " pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.792192 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xptv6\" (UniqueName: \"kubernetes.io/projected/60c641e8-f462-45b4-9626-297966f19298-kube-api-access-xptv6\") pod \"barbican-db-sync-2jbhb\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") " pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.792566 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-db-sync-config-data\") pod \"barbican-db-sync-2jbhb\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") " pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.792799 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-combined-ca-bundle\") pod \"barbican-db-sync-2jbhb\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") " pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.798002 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-db-sync-config-data\") pod \"barbican-db-sync-2jbhb\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") " pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.798359 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-combined-ca-bundle\") pod \"barbican-db-sync-2jbhb\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") " pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.812934 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xptv6\" (UniqueName: \"kubernetes.io/projected/60c641e8-f462-45b4-9626-297966f19298-kube-api-access-xptv6\") pod \"barbican-db-sync-2jbhb\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") " pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:48 crc kubenswrapper[4900]: I1202 15:12:48.939838 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:49 crc kubenswrapper[4900]: I1202 15:12:49.431130 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2jbhb"]
Dec 02 15:12:50 crc kubenswrapper[4900]: I1202 15:12:50.190436 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2jbhb" event={"ID":"60c641e8-f462-45b4-9626-297966f19298","Type":"ContainerStarted","Data":"49d9a2a3e82197544076cf9dca3e24961c69e9dead8f2ada55e767769b5f3d6e"}
Dec 02 15:12:50 crc kubenswrapper[4900]: I1202 15:12:50.190868 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2jbhb" event={"ID":"60c641e8-f462-45b4-9626-297966f19298","Type":"ContainerStarted","Data":"fbde9369f82cdc898c70b7b9809ee9748bb3c406fb6ed30bcc2e5dcc4ad13c68"}
Dec 02 15:12:50 crc kubenswrapper[4900]: I1202 15:12:50.217426 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2jbhb" podStartSLOduration=2.217397214 podStartE2EDuration="2.217397214s" podCreationTimestamp="2025-12-02 15:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:12:50.206660973 +0000 UTC m=+5415.622474834" watchObservedRunningTime="2025-12-02 15:12:50.217397214 +0000 UTC m=+5415.633211085"
Dec 02 15:12:51 crc kubenswrapper[4900]: I1202 15:12:51.202740 4900 generic.go:334] "Generic (PLEG): container finished" podID="60c641e8-f462-45b4-9626-297966f19298" containerID="49d9a2a3e82197544076cf9dca3e24961c69e9dead8f2ada55e767769b5f3d6e" exitCode=0
Dec 02 15:12:51 crc kubenswrapper[4900]: I1202 15:12:51.204073 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2jbhb" event={"ID":"60c641e8-f462-45b4-9626-297966f19298","Type":"ContainerDied","Data":"49d9a2a3e82197544076cf9dca3e24961c69e9dead8f2ada55e767769b5f3d6e"}
Dec 02 15:12:52 crc kubenswrapper[4900]: I1202 15:12:52.742804 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:52 crc kubenswrapper[4900]: I1202 15:12:52.874590 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xptv6\" (UniqueName: \"kubernetes.io/projected/60c641e8-f462-45b4-9626-297966f19298-kube-api-access-xptv6\") pod \"60c641e8-f462-45b4-9626-297966f19298\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") "
Dec 02 15:12:52 crc kubenswrapper[4900]: I1202 15:12:52.874740 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-db-sync-config-data\") pod \"60c641e8-f462-45b4-9626-297966f19298\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") "
Dec 02 15:12:52 crc kubenswrapper[4900]: I1202 15:12:52.874841 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-combined-ca-bundle\") pod \"60c641e8-f462-45b4-9626-297966f19298\" (UID: \"60c641e8-f462-45b4-9626-297966f19298\") "
Dec 02 15:12:52 crc kubenswrapper[4900]: I1202 15:12:52.881226 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c641e8-f462-45b4-9626-297966f19298-kube-api-access-xptv6" (OuterVolumeSpecName: "kube-api-access-xptv6") pod "60c641e8-f462-45b4-9626-297966f19298" (UID: "60c641e8-f462-45b4-9626-297966f19298"). InnerVolumeSpecName "kube-api-access-xptv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:12:52 crc kubenswrapper[4900]: I1202 15:12:52.881794 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "60c641e8-f462-45b4-9626-297966f19298" (UID: "60c641e8-f462-45b4-9626-297966f19298"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:12:52 crc kubenswrapper[4900]: I1202 15:12:52.903433 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60c641e8-f462-45b4-9626-297966f19298" (UID: "60c641e8-f462-45b4-9626-297966f19298"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:12:52 crc kubenswrapper[4900]: I1202 15:12:52.976411 4900 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 15:12:52 crc kubenswrapper[4900]: I1202 15:12:52.976446 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c641e8-f462-45b4-9626-297966f19298-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 15:12:52 crc kubenswrapper[4900]: I1202 15:12:52.976456 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xptv6\" (UniqueName: \"kubernetes.io/projected/60c641e8-f462-45b4-9626-297966f19298-kube-api-access-xptv6\") on node \"crc\" DevicePath \"\""
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.230483 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2jbhb" event={"ID":"60c641e8-f462-45b4-9626-297966f19298","Type":"ContainerDied","Data":"fbde9369f82cdc898c70b7b9809ee9748bb3c406fb6ed30bcc2e5dcc4ad13c68"}
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.230557 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbde9369f82cdc898c70b7b9809ee9748bb3c406fb6ed30bcc2e5dcc4ad13c68"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.230721 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2jbhb"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.422846 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl"]
Dec 02 15:12:53 crc kubenswrapper[4900]: E1202 15:12:53.423234 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c641e8-f462-45b4-9626-297966f19298" containerName="barbican-db-sync"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.423259 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c641e8-f462-45b4-9626-297966f19298" containerName="barbican-db-sync"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.423421 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c641e8-f462-45b4-9626-297966f19298" containerName="barbican-db-sync"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.424268 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.426891 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-k42lh"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.427099 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.428363 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.431619 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-78cd9cf4b9-48dtr"]
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.433026 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78cd9cf4b9-48dtr"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.443690 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.446127 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl"]
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.465431 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78cd9cf4b9-48dtr"]
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.535726 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4f7856c-8t6l5"]
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.537539 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.542258 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4f7856c-8t6l5"]
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.589325 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-config-data-custom\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.589399 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg58c\" (UniqueName: \"kubernetes.io/projected/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-kube-api-access-rg58c\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.589429 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288c6474-d114-42d9-8030-43ca582fd106-config-data\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.589456 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288c6474-d114-42d9-8030-43ca582fd106-logs\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.589497 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-config-data\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr"
Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.589545 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288c6474-d114-42d9-8030-43ca582fd106-combined-ca-bundle\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID:
\"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.589589 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-combined-ca-bundle\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.589626 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/288c6474-d114-42d9-8030-43ca582fd106-config-data-custom\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.589673 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2szh\" (UniqueName: \"kubernetes.io/projected/288c6474-d114-42d9-8030-43ca582fd106-kube-api-access-t2szh\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.589700 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-logs\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.645455 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6bcc48fd58-tt2tj"] Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.647376 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.649344 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.662126 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bcc48fd58-tt2tj"] Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.692770 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/288c6474-d114-42d9-8030-43ca582fd106-config-data-custom\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.692825 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2szh\" (UniqueName: \"kubernetes.io/projected/288c6474-d114-42d9-8030-43ca582fd106-kube-api-access-t2szh\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.692863 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-logs\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.692910 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.692938 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-config-data-custom\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.692975 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg58c\" (UniqueName: \"kubernetes.io/projected/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-kube-api-access-rg58c\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.693000 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288c6474-d114-42d9-8030-43ca582fd106-config-data\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.693031 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288c6474-d114-42d9-8030-43ca582fd106-logs\") pod 
\"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.693063 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-config-data\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.693084 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.693120 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-dns-svc\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.693146 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288c6474-d114-42d9-8030-43ca582fd106-combined-ca-bundle\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.693166 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbbzb\" (UniqueName: \"kubernetes.io/projected/f682a100-9c64-45b0-97f1-e896fb0b9fe8-kube-api-access-hbbzb\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.693196 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-config\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.693216 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-combined-ca-bundle\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.694266 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288c6474-d114-42d9-8030-43ca582fd106-logs\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.694431 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-logs\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.697689 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-config-data\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.698615 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-combined-ca-bundle\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.702170 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288c6474-d114-42d9-8030-43ca582fd106-combined-ca-bundle\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.705392 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/288c6474-d114-42d9-8030-43ca582fd106-config-data-custom\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.705482 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-config-data-custom\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.716965 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288c6474-d114-42d9-8030-43ca582fd106-config-data\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.721831 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg58c\" (UniqueName: \"kubernetes.io/projected/4d8522c3-bc85-4af9-aab5-8e610f1af1e0-kube-api-access-rg58c\") pod \"barbican-worker-78cd9cf4b9-48dtr\" (UID: \"4d8522c3-bc85-4af9-aab5-8e610f1af1e0\") " pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.727692 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2szh\" (UniqueName: \"kubernetes.io/projected/288c6474-d114-42d9-8030-43ca582fd106-kube-api-access-t2szh\") pod \"barbican-keystone-listener-7cb9f45f4b-8bgkl\" (UID: \"288c6474-d114-42d9-8030-43ca582fd106\") " pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.750024 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.759818 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78cd9cf4b9-48dtr" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.794308 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-combined-ca-bundle\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.794383 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.794420 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-config-data\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.794457 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-dns-svc\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.794681 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbbzb\" (UniqueName: \"kubernetes.io/projected/f682a100-9c64-45b0-97f1-e896fb0b9fe8-kube-api-access-hbbzb\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.795129 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-config\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.795193 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-logs\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.795274 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-config-data-custom\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.795402 4900 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbbtl\" (UniqueName: \"kubernetes.io/projected/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-kube-api-access-qbbtl\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.795709 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.795883 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-config\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.795896 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.795945 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-dns-svc\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.796632 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.815403 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbbzb\" (UniqueName: \"kubernetes.io/projected/f682a100-9c64-45b0-97f1-e896fb0b9fe8-kube-api-access-hbbzb\") pod \"dnsmasq-dns-6bb4f7856c-8t6l5\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") " pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.880778 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.897735 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbbtl\" (UniqueName: \"kubernetes.io/projected/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-kube-api-access-qbbtl\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.897817 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-combined-ca-bundle\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.897853 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-config-data\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.897892 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-logs\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.897916 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-config-data-custom\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.901054 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-logs\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.903081 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-config-data-custom\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.905080 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-combined-ca-bundle\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.909677 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-config-data\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc 
kubenswrapper[4900]: I1202 15:12:53.922967 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbbtl\" (UniqueName: \"kubernetes.io/projected/a2a936b2-ff27-4972-8238-f4cc6b2e1b63-kube-api-access-qbbtl\") pod \"barbican-api-6bcc48fd58-tt2tj\" (UID: \"a2a936b2-ff27-4972-8238-f4cc6b2e1b63\") " pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:53 crc kubenswrapper[4900]: I1202 15:12:53.962977 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:54 crc kubenswrapper[4900]: I1202 15:12:54.148903 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78cd9cf4b9-48dtr"] Dec 02 15:12:54 crc kubenswrapper[4900]: W1202 15:12:54.178580 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d8522c3_bc85_4af9_aab5_8e610f1af1e0.slice/crio-ba511a76839604d0e708b8bef4ddaa2a2950669756be025d3472aeb3a2ce7c8d WatchSource:0}: Error finding container ba511a76839604d0e708b8bef4ddaa2a2950669756be025d3472aeb3a2ce7c8d: Status 404 returned error can't find the container with id ba511a76839604d0e708b8bef4ddaa2a2950669756be025d3472aeb3a2ce7c8d Dec 02 15:12:54 crc kubenswrapper[4900]: I1202 15:12:54.242029 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl"] Dec 02 15:12:54 crc kubenswrapper[4900]: I1202 15:12:54.245224 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78cd9cf4b9-48dtr" event={"ID":"4d8522c3-bc85-4af9-aab5-8e610f1af1e0","Type":"ContainerStarted","Data":"ba511a76839604d0e708b8bef4ddaa2a2950669756be025d3472aeb3a2ce7c8d"} Dec 02 15:12:54 crc kubenswrapper[4900]: W1202 15:12:54.254582 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod288c6474_d114_42d9_8030_43ca582fd106.slice/crio-5e712c7dc5d3b3e44f8ecb533d9294885c19d42b41a9c2f0fe3343c80fe8a4fa WatchSource:0}: Error finding container 5e712c7dc5d3b3e44f8ecb533d9294885c19d42b41a9c2f0fe3343c80fe8a4fa: Status 404 returned error can't find the container with id 5e712c7dc5d3b3e44f8ecb533d9294885c19d42b41a9c2f0fe3343c80fe8a4fa Dec 02 15:12:54 crc kubenswrapper[4900]: I1202 15:12:54.428935 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4f7856c-8t6l5"] Dec 02 15:12:54 crc kubenswrapper[4900]: W1202 15:12:54.436918 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf682a100_9c64_45b0_97f1_e896fb0b9fe8.slice/crio-8d3dcf3bc7336716cef8f1c7e9eb2756cab33bd168e2afc4371aa3d7be5e2e9b WatchSource:0}: Error finding container 8d3dcf3bc7336716cef8f1c7e9eb2756cab33bd168e2afc4371aa3d7be5e2e9b: Status 404 returned error can't find the container with id 8d3dcf3bc7336716cef8f1c7e9eb2756cab33bd168e2afc4371aa3d7be5e2e9b Dec 02 15:12:54 crc kubenswrapper[4900]: I1202 15:12:54.550153 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bcc48fd58-tt2tj"] Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.255252 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" event={"ID":"288c6474-d114-42d9-8030-43ca582fd106","Type":"ContainerStarted","Data":"12fbbc56b509cce483c823d515fc814f14513eafd4974c0c0870df385cff811b"} Dec 02 15:12:55 crc kubenswrapper[4900]: 
I1202 15:12:55.255587 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" event={"ID":"288c6474-d114-42d9-8030-43ca582fd106","Type":"ContainerStarted","Data":"2b2327a4bc8feaa3d4afa199820fe2ae79b801b7d64409fb96139a3f119fc7ab"} Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.255599 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" event={"ID":"288c6474-d114-42d9-8030-43ca582fd106","Type":"ContainerStarted","Data":"5e712c7dc5d3b3e44f8ecb533d9294885c19d42b41a9c2f0fe3343c80fe8a4fa"} Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.257305 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78cd9cf4b9-48dtr" event={"ID":"4d8522c3-bc85-4af9-aab5-8e610f1af1e0","Type":"ContainerStarted","Data":"61bcb8a6b5af12729b27d3aa8c04181c6e31f8ad0a59162e43f716a7f73fdd36"} Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.257325 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78cd9cf4b9-48dtr" event={"ID":"4d8522c3-bc85-4af9-aab5-8e610f1af1e0","Type":"ContainerStarted","Data":"f0e89b2e53476f3ff059e01ba6f0fd5c08a45c6cf64d60d73180b7523de8b699"} Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.259724 4900 generic.go:334] "Generic (PLEG): container finished" podID="f682a100-9c64-45b0-97f1-e896fb0b9fe8" containerID="55bd410d4a2d14c639587c8b021b11f0faa95be7f17c21908a0c981eee9375bb" exitCode=0 Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.259760 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" event={"ID":"f682a100-9c64-45b0-97f1-e896fb0b9fe8","Type":"ContainerDied","Data":"55bd410d4a2d14c639587c8b021b11f0faa95be7f17c21908a0c981eee9375bb"} Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.259774 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" event={"ID":"f682a100-9c64-45b0-97f1-e896fb0b9fe8","Type":"ContainerStarted","Data":"8d3dcf3bc7336716cef8f1c7e9eb2756cab33bd168e2afc4371aa3d7be5e2e9b"} Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.266971 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bcc48fd58-tt2tj" event={"ID":"a2a936b2-ff27-4972-8238-f4cc6b2e1b63","Type":"ContainerStarted","Data":"e03cea9297f33fb44dfc93b0c974f51df3186da6594f1c12751f80cce340fbc4"} Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.267006 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bcc48fd58-tt2tj" event={"ID":"a2a936b2-ff27-4972-8238-f4cc6b2e1b63","Type":"ContainerStarted","Data":"d1eb074113cf06e4719df4f13e7d13cc12e72ce7c9870029de4872611bfb6e86"} Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.267020 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bcc48fd58-tt2tj" event={"ID":"a2a936b2-ff27-4972-8238-f4cc6b2e1b63","Type":"ContainerStarted","Data":"3b508a80bfd6dce0f73dd069dce2de174fdbcd635b4fe2708ddfc767a59a3a59"} Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.267256 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.267370 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bcc48fd58-tt2tj" Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.270980 4900 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-keystone-listener-7cb9f45f4b-8bgkl" podStartSLOduration=2.270966217 podStartE2EDuration="2.270966217s" podCreationTimestamp="2025-12-02 15:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:12:55.269119655 +0000 UTC m=+5420.684933506" watchObservedRunningTime="2025-12-02 15:12:55.270966217 +0000 UTC m=+5420.686780068" Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.304240 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bcc48fd58-tt2tj" podStartSLOduration=2.30421982 podStartE2EDuration="2.30421982s" podCreationTimestamp="2025-12-02 15:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:12:55.294463287 +0000 UTC m=+5420.710277158" watchObservedRunningTime="2025-12-02 15:12:55.30421982 +0000 UTC m=+5420.720033671" Dec 02 15:12:55 crc kubenswrapper[4900]: I1202 15:12:55.325172 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-78cd9cf4b9-48dtr" podStartSLOduration=2.325150788 podStartE2EDuration="2.325150788s" podCreationTimestamp="2025-12-02 15:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:12:55.317214675 +0000 UTC m=+5420.733028526" watchObservedRunningTime="2025-12-02 15:12:55.325150788 +0000 UTC m=+5420.740964639" Dec 02 15:12:56 crc kubenswrapper[4900]: I1202 15:12:56.276978 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" event={"ID":"f682a100-9c64-45b0-97f1-e896fb0b9fe8","Type":"ContainerStarted","Data":"982ebc35218e99919d6191c10cd75f441ed4a42ee5c7787b236bab706c1a00a9"} Dec 02 15:12:56 crc kubenswrapper[4900]: I1202 15:12:56.278191 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:12:56 crc kubenswrapper[4900]: I1202 15:12:56.301990 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" podStartSLOduration=3.30197044 podStartE2EDuration="3.30197044s" podCreationTimestamp="2025-12-02 15:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:12:56.293581045 +0000 UTC m=+5421.709394896" watchObservedRunningTime="2025-12-02 15:12:56.30197044 +0000 UTC m=+5421.717784281" Dec 02 15:13:01 crc kubenswrapper[4900]: I1202 15:13:01.672178 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sd5b5"] Dec 02 15:13:01 crc kubenswrapper[4900]: I1202 15:13:01.675809 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sd5b5" Dec 02 15:13:01 crc kubenswrapper[4900]: I1202 15:13:01.686033 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sd5b5"] Dec 02 15:13:01 crc kubenswrapper[4900]: I1202 15:13:01.861718 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-utilities\") pod \"redhat-operators-sd5b5\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") " pod="openshift-marketplace/redhat-operators-sd5b5" Dec 02 15:13:01 crc kubenswrapper[4900]: I1202 15:13:01.861820 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2nwt\" (UniqueName: \"kubernetes.io/projected/cad6fd34-4fe6-42af-860e-d619e1ec6709-kube-api-access-s2nwt\") pod \"redhat-operators-sd5b5\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") " pod="openshift-marketplace/redhat-operators-sd5b5" Dec 02 15:13:01 crc kubenswrapper[4900]: I1202 15:13:01.862139 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-catalog-content\") pod \"redhat-operators-sd5b5\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") " pod="openshift-marketplace/redhat-operators-sd5b5" Dec 02 15:13:01 crc kubenswrapper[4900]: I1202 15:13:01.963661 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-utilities\") pod \"redhat-operators-sd5b5\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") " pod="openshift-marketplace/redhat-operators-sd5b5" Dec 02 15:13:02 crc kubenswrapper[4900]: I1202 15:13:01.963775 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2nwt\" (UniqueName: \"kubernetes.io/projected/cad6fd34-4fe6-42af-860e-d619e1ec6709-kube-api-access-s2nwt\") pod \"redhat-operators-sd5b5\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") " pod="openshift-marketplace/redhat-operators-sd5b5" Dec 02 15:13:02 crc kubenswrapper[4900]: I1202 15:13:01.963837 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-catalog-content\") pod \"redhat-operators-sd5b5\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") " pod="openshift-marketplace/redhat-operators-sd5b5" Dec 02 15:13:02 crc kubenswrapper[4900]: I1202 15:13:01.964478 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-catalog-content\") pod \"redhat-operators-sd5b5\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") " pod="openshift-marketplace/redhat-operators-sd5b5" Dec 02 15:13:02 crc kubenswrapper[4900]: I1202 15:13:01.964673 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-utilities\") pod \"redhat-operators-sd5b5\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") " pod="openshift-marketplace/redhat-operators-sd5b5" Dec 02 15:13:02 crc kubenswrapper[4900]: I1202 15:13:01.984794 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s2nwt\" (UniqueName: \"kubernetes.io/projected/cad6fd34-4fe6-42af-860e-d619e1ec6709-kube-api-access-s2nwt\") pod \"redhat-operators-sd5b5\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") " pod="openshift-marketplace/redhat-operators-sd5b5" Dec 02 15:13:02 crc kubenswrapper[4900]: I1202 15:13:02.015038 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sd5b5" Dec 02 15:13:02 crc kubenswrapper[4900]: I1202 15:13:02.515535 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sd5b5"] Dec 02 15:13:03 crc kubenswrapper[4900]: I1202 15:13:03.359768 4900 generic.go:334] "Generic (PLEG): container finished" podID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerID="8424cccb9578507f70d85735062f9998323710edd4f672ca8734fceea104b8f2" exitCode=0 Dec 02 15:13:03 crc kubenswrapper[4900]: I1202 15:13:03.359843 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd5b5" event={"ID":"cad6fd34-4fe6-42af-860e-d619e1ec6709","Type":"ContainerDied","Data":"8424cccb9578507f70d85735062f9998323710edd4f672ca8734fceea104b8f2"} Dec 02 15:13:03 crc kubenswrapper[4900]: I1202 15:13:03.360058 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd5b5" event={"ID":"cad6fd34-4fe6-42af-860e-d619e1ec6709","Type":"ContainerStarted","Data":"6d378e692e040f20a204a0f76753b4781def4a86555efca692adef0b3684547e"} Dec 02 15:13:03 crc kubenswrapper[4900]: I1202 15:13:03.882790 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" Dec 02 15:13:03 crc kubenswrapper[4900]: I1202 15:13:03.938364 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7f8f447f-gtg29"] Dec 02 15:13:03 crc kubenswrapper[4900]: I1202 15:13:03.938613 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" podUID="d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" containerName="dnsmasq-dns" containerID="cri-o://f83da3bfe3e054a647e28fe56c91482130ac54310196fa2fb2e1e07e5e3b0ef7" gracePeriod=10 Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.370231 4900 generic.go:334] "Generic (PLEG): container finished" podID="d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" containerID="f83da3bfe3e054a647e28fe56c91482130ac54310196fa2fb2e1e07e5e3b0ef7" exitCode=0 Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.370311 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" event={"ID":"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4","Type":"ContainerDied","Data":"f83da3bfe3e054a647e28fe56c91482130ac54310196fa2fb2e1e07e5e3b0ef7"} Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.370540 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" event={"ID":"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4","Type":"ContainerDied","Data":"04cbf9f3ab7625c0e1c7244de4d7690ec975881e7d3bd865b9104cda5a9f7d34"} Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.370563 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04cbf9f3ab7625c0e1c7244de4d7690ec975881e7d3bd865b9104cda5a9f7d34" Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.409682 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29" Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.511139 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-sb\") pod \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.511191 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-nb\") pod \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.511249 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-config\") pod \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.511317 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-dns-svc\") pod \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.511356 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcpb6\" (UniqueName: \"kubernetes.io/projected/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-kube-api-access-rcpb6\") pod \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\" (UID: \"d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4\") " Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.519920 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-kube-api-access-rcpb6" (OuterVolumeSpecName: "kube-api-access-rcpb6") pod "d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" (UID: "d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4"). InnerVolumeSpecName "kube-api-access-rcpb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.550609 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-config" (OuterVolumeSpecName: "config") pod "d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" (UID: "d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.552777 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" (UID: "d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.556991 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" (UID: "d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.559138 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" (UID: "d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.613794 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.613824 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.613835 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcpb6\" (UniqueName: \"kubernetes.io/projected/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-kube-api-access-rcpb6\") on node \"crc\" DevicePath \"\"" Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.613846 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 15:13:04 crc kubenswrapper[4900]: I1202 15:13:04.613854 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 15:13:05 crc kubenswrapper[4900]: I1202 15:13:05.381871 4900 generic.go:334] "Generic (PLEG): container finished" podID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerID="454ed42ee6fb36da3b41f5c04d0a454baa913d8aa2c5bbc3fd0bced3453ec86f" exitCode=0 Dec 02 15:13:05 crc kubenswrapper[4900]: I1202 15:13:05.381952 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd5b5" event={"ID":"cad6fd34-4fe6-42af-860e-d619e1ec6709","Type":"ContainerDied","Data":"454ed42ee6fb36da3b41f5c04d0a454baa913d8aa2c5bbc3fd0bced3453ec86f"} Dec 02 15:13:05 crc kubenswrapper[4900]: I1202 15:13:05.382227 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7f8f447f-gtg29"
Dec 02 15:13:05 crc kubenswrapper[4900]: I1202 15:13:05.426766 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7f8f447f-gtg29"]
Dec 02 15:13:05 crc kubenswrapper[4900]: I1202 15:13:05.433897 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f7f8f447f-gtg29"]
Dec 02 15:13:05 crc kubenswrapper[4900]: I1202 15:13:05.464990 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bcc48fd58-tt2tj"
Dec 02 15:13:05 crc kubenswrapper[4900]: I1202 15:13:05.483460 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bcc48fd58-tt2tj"
Dec 02 15:13:06 crc kubenswrapper[4900]: I1202 15:13:06.405607 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd5b5" event={"ID":"cad6fd34-4fe6-42af-860e-d619e1ec6709","Type":"ContainerStarted","Data":"a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71"}
Dec 02 15:13:06 crc kubenswrapper[4900]: I1202 15:13:06.436933 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sd5b5" podStartSLOduration=2.847373346 podStartE2EDuration="5.436894898s" podCreationTimestamp="2025-12-02 15:13:01 +0000 UTC" firstStartedPulling="2025-12-02 15:13:03.36223277 +0000 UTC m=+5428.778046651" lastFinishedPulling="2025-12-02 15:13:05.951754312 +0000 UTC m=+5431.367568203" observedRunningTime="2025-12-02 15:13:06.429294244 +0000 UTC m=+5431.845108135" watchObservedRunningTime="2025-12-02 15:13:06.436894898 +0000 UTC m=+5431.852708749"
Dec 02 15:13:06 crc kubenswrapper[4900]: I1202 15:13:06.922552 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" path="/var/lib/kubelet/pods/d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4/volumes"
Dec 02 15:13:12 crc kubenswrapper[4900]: I1202 15:13:12.016365 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sd5b5"
Dec 02 15:13:12 crc kubenswrapper[4900]: I1202 15:13:12.018154 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sd5b5"
Dec 02 15:13:13 crc kubenswrapper[4900]: I1202 15:13:13.088209 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sd5b5" podUID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerName="registry-server" probeResult="failure" output=<
Dec 02 15:13:13 crc kubenswrapper[4900]: timeout: failed to connect service ":50051" within 1s
Dec 02 15:13:13 crc kubenswrapper[4900]: >
Dec 02 15:13:17 crc kubenswrapper[4900]: I1202 15:13:17.977834 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4l8fr"]
Dec 02 15:13:17 crc kubenswrapper[4900]: E1202 15:13:17.978755 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" containerName="init"
Dec 02 15:13:17 crc kubenswrapper[4900]: I1202 15:13:17.978778 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" containerName="init"
Dec 02 15:13:17 crc kubenswrapper[4900]: E1202 15:13:17.978830 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" containerName="dnsmasq-dns"
Dec 02 15:13:17 crc kubenswrapper[4900]: I1202 15:13:17.978843 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" containerName="dnsmasq-dns"
Dec 02 15:13:17 crc kubenswrapper[4900]: I1202 15:13:17.979128 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1bdb5d8-7b90-4ebf-87ee-af59b331a8a4" containerName="dnsmasq-dns"
Dec 02 15:13:17 crc kubenswrapper[4900]: I1202 15:13:17.980136 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4l8fr"
Dec 02 15:13:17 crc kubenswrapper[4900]: I1202 15:13:17.988125 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4l8fr"]
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.062477 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82105e45-2a76-4957-af32-bed10436bcff-operator-scripts\") pod \"neutron-db-create-4l8fr\" (UID: \"82105e45-2a76-4957-af32-bed10436bcff\") " pod="openstack/neutron-db-create-4l8fr"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.062545 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pqq6\" (UniqueName: \"kubernetes.io/projected/82105e45-2a76-4957-af32-bed10436bcff-kube-api-access-6pqq6\") pod \"neutron-db-create-4l8fr\" (UID: \"82105e45-2a76-4957-af32-bed10436bcff\") " pod="openstack/neutron-db-create-4l8fr"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.084857 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b13d-account-create-update-xmbc8"]
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.085972 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b13d-account-create-update-xmbc8"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.089360 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.096394 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b13d-account-create-update-xmbc8"]
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.163584 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ebc034-0549-46ca-b553-739e5317d5ff-operator-scripts\") pod \"neutron-b13d-account-create-update-xmbc8\" (UID: \"48ebc034-0549-46ca-b553-739e5317d5ff\") " pod="openstack/neutron-b13d-account-create-update-xmbc8"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.163680 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82105e45-2a76-4957-af32-bed10436bcff-operator-scripts\") pod \"neutron-db-create-4l8fr\" (UID: \"82105e45-2a76-4957-af32-bed10436bcff\") " pod="openstack/neutron-db-create-4l8fr"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.163718 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwpkm\" (UniqueName: \"kubernetes.io/projected/48ebc034-0549-46ca-b553-739e5317d5ff-kube-api-access-bwpkm\") pod \"neutron-b13d-account-create-update-xmbc8\" (UID: \"48ebc034-0549-46ca-b553-739e5317d5ff\") " pod="openstack/neutron-b13d-account-create-update-xmbc8"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.163826 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pqq6\" (UniqueName: \"kubernetes.io/projected/82105e45-2a76-4957-af32-bed10436bcff-kube-api-access-6pqq6\") pod \"neutron-db-create-4l8fr\" (UID: \"82105e45-2a76-4957-af32-bed10436bcff\") " pod="openstack/neutron-db-create-4l8fr"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.164679 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82105e45-2a76-4957-af32-bed10436bcff-operator-scripts\") pod \"neutron-db-create-4l8fr\" (UID: \"82105e45-2a76-4957-af32-bed10436bcff\") " pod="openstack/neutron-db-create-4l8fr"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.180052 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pqq6\" (UniqueName: \"kubernetes.io/projected/82105e45-2a76-4957-af32-bed10436bcff-kube-api-access-6pqq6\") pod \"neutron-db-create-4l8fr\" (UID: \"82105e45-2a76-4957-af32-bed10436bcff\") " pod="openstack/neutron-db-create-4l8fr"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.265890 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ebc034-0549-46ca-b553-739e5317d5ff-operator-scripts\") pod \"neutron-b13d-account-create-update-xmbc8\" (UID: \"48ebc034-0549-46ca-b553-739e5317d5ff\") " pod="openstack/neutron-b13d-account-create-update-xmbc8"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.266250 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwpkm\" (UniqueName: \"kubernetes.io/projected/48ebc034-0549-46ca-b553-739e5317d5ff-kube-api-access-bwpkm\") pod \"neutron-b13d-account-create-update-xmbc8\" (UID: \"48ebc034-0549-46ca-b553-739e5317d5ff\") " pod="openstack/neutron-b13d-account-create-update-xmbc8"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.266607 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ebc034-0549-46ca-b553-739e5317d5ff-operator-scripts\") pod \"neutron-b13d-account-create-update-xmbc8\" (UID: \"48ebc034-0549-46ca-b553-739e5317d5ff\") " pod="openstack/neutron-b13d-account-create-update-xmbc8"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.281414 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwpkm\" (UniqueName: \"kubernetes.io/projected/48ebc034-0549-46ca-b553-739e5317d5ff-kube-api-access-bwpkm\") pod \"neutron-b13d-account-create-update-xmbc8\" (UID: \"48ebc034-0549-46ca-b553-739e5317d5ff\") " pod="openstack/neutron-b13d-account-create-update-xmbc8"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.375965 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4l8fr"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.408211 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b13d-account-create-update-xmbc8"
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.864837 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4l8fr"]
Dec 02 15:13:18 crc kubenswrapper[4900]: W1202 15:13:18.926489 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ebc034_0549_46ca_b553_739e5317d5ff.slice/crio-a7102737b30babd68a258caff6c4a3a670ba4611a6a9951b8a4e38ab00ae400c WatchSource:0}: Error finding container a7102737b30babd68a258caff6c4a3a670ba4611a6a9951b8a4e38ab00ae400c: Status 404 returned error can't find the container with id a7102737b30babd68a258caff6c4a3a670ba4611a6a9951b8a4e38ab00ae400c
Dec 02 15:13:18 crc kubenswrapper[4900]: I1202 15:13:18.934671 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b13d-account-create-update-xmbc8"]
Dec 02 15:13:19 crc kubenswrapper[4900]: I1202 15:13:19.559392 4900 generic.go:334] "Generic (PLEG): container finished" podID="82105e45-2a76-4957-af32-bed10436bcff" containerID="8c9b389eb786ea66b0ff9aee7f2766519257029c9dbe8640949564c29dd126de" exitCode=0
Dec 02 15:13:19 crc kubenswrapper[4900]: I1202 15:13:19.559471 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4l8fr" event={"ID":"82105e45-2a76-4957-af32-bed10436bcff","Type":"ContainerDied","Data":"8c9b389eb786ea66b0ff9aee7f2766519257029c9dbe8640949564c29dd126de"}
Dec 02 15:13:19 crc kubenswrapper[4900]: I1202 15:13:19.559938 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4l8fr" event={"ID":"82105e45-2a76-4957-af32-bed10436bcff","Type":"ContainerStarted","Data":"df627ec1f8c6e1847296c451e0a6a445a814facef198670617b159afcabbd865"}
Dec 02 15:13:19 crc kubenswrapper[4900]: I1202 15:13:19.565847 4900 generic.go:334] "Generic (PLEG): container finished" podID="48ebc034-0549-46ca-b553-739e5317d5ff" containerID="a8258420d5640e0952b59fa881c36fb46769ede6a5e160daa15f31a36458d586" exitCode=0
Dec 02 15:13:19 crc kubenswrapper[4900]: I1202 15:13:19.565914 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b13d-account-create-update-xmbc8" event={"ID":"48ebc034-0549-46ca-b553-739e5317d5ff","Type":"ContainerDied","Data":"a8258420d5640e0952b59fa881c36fb46769ede6a5e160daa15f31a36458d586"}
Dec 02 15:13:19 crc kubenswrapper[4900]: I1202 15:13:19.565956 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b13d-account-create-update-xmbc8" event={"ID":"48ebc034-0549-46ca-b553-739e5317d5ff","Type":"ContainerStarted","Data":"a7102737b30babd68a258caff6c4a3a670ba4611a6a9951b8a4e38ab00ae400c"}
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.041841 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4l8fr"
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.051433 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b13d-account-create-update-xmbc8"
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.115860 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ebc034-0549-46ca-b553-739e5317d5ff-operator-scripts\") pod \"48ebc034-0549-46ca-b553-739e5317d5ff\" (UID: \"48ebc034-0549-46ca-b553-739e5317d5ff\") "
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.116017 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82105e45-2a76-4957-af32-bed10436bcff-operator-scripts\") pod \"82105e45-2a76-4957-af32-bed10436bcff\" (UID: \"82105e45-2a76-4957-af32-bed10436bcff\") "
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.116109 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwpkm\" (UniqueName: \"kubernetes.io/projected/48ebc034-0549-46ca-b553-739e5317d5ff-kube-api-access-bwpkm\") pod \"48ebc034-0549-46ca-b553-739e5317d5ff\" (UID: \"48ebc034-0549-46ca-b553-739e5317d5ff\") "
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.116165 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pqq6\" (UniqueName: \"kubernetes.io/projected/82105e45-2a76-4957-af32-bed10436bcff-kube-api-access-6pqq6\") pod \"82105e45-2a76-4957-af32-bed10436bcff\" (UID: \"82105e45-2a76-4957-af32-bed10436bcff\") "
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.116602 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82105e45-2a76-4957-af32-bed10436bcff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82105e45-2a76-4957-af32-bed10436bcff" (UID: "82105e45-2a76-4957-af32-bed10436bcff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.116715 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ebc034-0549-46ca-b553-739e5317d5ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48ebc034-0549-46ca-b553-739e5317d5ff" (UID: "48ebc034-0549-46ca-b553-739e5317d5ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.121817 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ebc034-0549-46ca-b553-739e5317d5ff-kube-api-access-bwpkm" (OuterVolumeSpecName: "kube-api-access-bwpkm") pod "48ebc034-0549-46ca-b553-739e5317d5ff" (UID: "48ebc034-0549-46ca-b553-739e5317d5ff"). InnerVolumeSpecName "kube-api-access-bwpkm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.131876 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82105e45-2a76-4957-af32-bed10436bcff-kube-api-access-6pqq6" (OuterVolumeSpecName: "kube-api-access-6pqq6") pod "82105e45-2a76-4957-af32-bed10436bcff" (UID: "82105e45-2a76-4957-af32-bed10436bcff"). InnerVolumeSpecName "kube-api-access-6pqq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.218285 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82105e45-2a76-4957-af32-bed10436bcff-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.218368 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwpkm\" (UniqueName: \"kubernetes.io/projected/48ebc034-0549-46ca-b553-739e5317d5ff-kube-api-access-bwpkm\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.218381 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pqq6\" (UniqueName: \"kubernetes.io/projected/82105e45-2a76-4957-af32-bed10436bcff-kube-api-access-6pqq6\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.218392 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ebc034-0549-46ca-b553-739e5317d5ff-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.592892 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4l8fr" event={"ID":"82105e45-2a76-4957-af32-bed10436bcff","Type":"ContainerDied","Data":"df627ec1f8c6e1847296c451e0a6a445a814facef198670617b159afcabbd865"}
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.592945 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df627ec1f8c6e1847296c451e0a6a445a814facef198670617b159afcabbd865"
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.592974 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4l8fr"
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.595203 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b13d-account-create-update-xmbc8" event={"ID":"48ebc034-0549-46ca-b553-739e5317d5ff","Type":"ContainerDied","Data":"a7102737b30babd68a258caff6c4a3a670ba4611a6a9951b8a4e38ab00ae400c"}
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.595272 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7102737b30babd68a258caff6c4a3a670ba4611a6a9951b8a4e38ab00ae400c"
Dec 02 15:13:21 crc kubenswrapper[4900]: I1202 15:13:21.595229 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b13d-account-create-update-xmbc8"
Dec 02 15:13:22 crc kubenswrapper[4900]: I1202 15:13:22.103756 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sd5b5"
Dec 02 15:13:22 crc kubenswrapper[4900]: I1202 15:13:22.165406 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sd5b5"
Dec 02 15:13:22 crc kubenswrapper[4900]: I1202 15:13:22.347770 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sd5b5"]
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.397731 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qj5hh"]
Dec 02 15:13:23 crc kubenswrapper[4900]: E1202 15:13:23.398111 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ebc034-0549-46ca-b553-739e5317d5ff" containerName="mariadb-account-create-update"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.398130 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ebc034-0549-46ca-b553-739e5317d5ff" containerName="mariadb-account-create-update"
Dec 02 15:13:23 crc kubenswrapper[4900]: E1202 15:13:23.398161 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82105e45-2a76-4957-af32-bed10436bcff" containerName="mariadb-database-create"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.398169 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="82105e45-2a76-4957-af32-bed10436bcff" containerName="mariadb-database-create"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.398339 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="82105e45-2a76-4957-af32-bed10436bcff" containerName="mariadb-database-create"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.398356 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ebc034-0549-46ca-b553-739e5317d5ff" containerName="mariadb-account-create-update"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.398984 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.404084 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.404107 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.404204 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-shmlw"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.438812 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qj5hh"]
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.460463 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-combined-ca-bundle\") pod \"neutron-db-sync-qj5hh\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") " pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.460596 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-config\") pod \"neutron-db-sync-qj5hh\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") " pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.460688 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt5pp\" (UniqueName: \"kubernetes.io/projected/1bcbe4ce-733f-4352-9564-826b6113b4dd-kube-api-access-jt5pp\") pod \"neutron-db-sync-qj5hh\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") " pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.562202 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-config\") pod \"neutron-db-sync-qj5hh\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") " pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.562261 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt5pp\" (UniqueName: \"kubernetes.io/projected/1bcbe4ce-733f-4352-9564-826b6113b4dd-kube-api-access-jt5pp\") pod \"neutron-db-sync-qj5hh\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") " pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.562326 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-combined-ca-bundle\") pod \"neutron-db-sync-qj5hh\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") " pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.568305 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-combined-ca-bundle\") pod \"neutron-db-sync-qj5hh\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") " pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.568389 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-config\") pod \"neutron-db-sync-qj5hh\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") " pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.578890 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt5pp\" (UniqueName: \"kubernetes.io/projected/1bcbe4ce-733f-4352-9564-826b6113b4dd-kube-api-access-jt5pp\") pod \"neutron-db-sync-qj5hh\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") " pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.610298 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sd5b5" podUID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerName="registry-server" containerID="cri-o://a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71" gracePeriod=2
Dec 02 15:13:23 crc kubenswrapper[4900]: I1202 15:13:23.719045 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.106992 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sd5b5"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.176573 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2nwt\" (UniqueName: \"kubernetes.io/projected/cad6fd34-4fe6-42af-860e-d619e1ec6709-kube-api-access-s2nwt\") pod \"cad6fd34-4fe6-42af-860e-d619e1ec6709\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") "
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.176656 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-utilities\") pod \"cad6fd34-4fe6-42af-860e-d619e1ec6709\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") "
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.176693 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-catalog-content\") pod \"cad6fd34-4fe6-42af-860e-d619e1ec6709\" (UID: \"cad6fd34-4fe6-42af-860e-d619e1ec6709\") "
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.179568 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-utilities" (OuterVolumeSpecName: "utilities") pod "cad6fd34-4fe6-42af-860e-d619e1ec6709" (UID: "cad6fd34-4fe6-42af-860e-d619e1ec6709"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.185315 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad6fd34-4fe6-42af-860e-d619e1ec6709-kube-api-access-s2nwt" (OuterVolumeSpecName: "kube-api-access-s2nwt") pod "cad6fd34-4fe6-42af-860e-d619e1ec6709" (UID: "cad6fd34-4fe6-42af-860e-d619e1ec6709"). InnerVolumeSpecName "kube-api-access-s2nwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.262294 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qj5hh"]
Dec 02 15:13:24 crc kubenswrapper[4900]: W1202 15:13:24.262619 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bcbe4ce_733f_4352_9564_826b6113b4dd.slice/crio-9300b2ccc741af6898a5141a83978fc04e69255e47d5ee7bb1ebf980a1566fb4 WatchSource:0}: Error finding container 9300b2ccc741af6898a5141a83978fc04e69255e47d5ee7bb1ebf980a1566fb4: Status 404 returned error can't find the container with id 9300b2ccc741af6898a5141a83978fc04e69255e47d5ee7bb1ebf980a1566fb4
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.279455 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2nwt\" (UniqueName: \"kubernetes.io/projected/cad6fd34-4fe6-42af-860e-d619e1ec6709-kube-api-access-s2nwt\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.279533 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.285702 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cad6fd34-4fe6-42af-860e-d619e1ec6709" (UID: "cad6fd34-4fe6-42af-860e-d619e1ec6709"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.381766 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cad6fd34-4fe6-42af-860e-d619e1ec6709-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.621342 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qj5hh" event={"ID":"1bcbe4ce-733f-4352-9564-826b6113b4dd","Type":"ContainerStarted","Data":"41ff189566196b65f42dea204a2041e602fa90e743bf9f85de3f0b31d32811e1"}
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.621633 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qj5hh" event={"ID":"1bcbe4ce-733f-4352-9564-826b6113b4dd","Type":"ContainerStarted","Data":"9300b2ccc741af6898a5141a83978fc04e69255e47d5ee7bb1ebf980a1566fb4"}
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.626191 4900 generic.go:334] "Generic (PLEG): container finished" podID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerID="a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71" exitCode=0
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.626231 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd5b5" event={"ID":"cad6fd34-4fe6-42af-860e-d619e1ec6709","Type":"ContainerDied","Data":"a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71"}
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.626255 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sd5b5" event={"ID":"cad6fd34-4fe6-42af-860e-d619e1ec6709","Type":"ContainerDied","Data":"6d378e692e040f20a204a0f76753b4781def4a86555efca692adef0b3684547e"}
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.626275 4900 scope.go:117] "RemoveContainer" containerID="a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.626438 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sd5b5"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.646999 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qj5hh" podStartSLOduration=1.646972183 podStartE2EDuration="1.646972183s" podCreationTimestamp="2025-12-02 15:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:13:24.636836738 +0000 UTC m=+5450.052650589" watchObservedRunningTime="2025-12-02 15:13:24.646972183 +0000 UTC m=+5450.062786064"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.650434 4900 scope.go:117] "RemoveContainer" containerID="454ed42ee6fb36da3b41f5c04d0a454baa913d8aa2c5bbc3fd0bced3453ec86f"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.673686 4900 scope.go:117] "RemoveContainer" containerID="8424cccb9578507f70d85735062f9998323710edd4f672ca8734fceea104b8f2"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.678589 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sd5b5"]
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.684700 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sd5b5"]
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.713416 4900 scope.go:117] "RemoveContainer" containerID="a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71"
Dec 02 15:13:24 crc kubenswrapper[4900]: E1202 15:13:24.713922 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71\": container with ID starting with a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71 not found: ID does not exist" containerID="a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.713954 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71"} err="failed to get container status \"a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71\": rpc error: code = NotFound desc = could not find container \"a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71\": container with ID starting with a268c690cc8ff2e4dac31b24061180a389028e52fa4860045ae80536b576ae71 not found: ID does not exist"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.713977 4900 scope.go:117] "RemoveContainer" containerID="454ed42ee6fb36da3b41f5c04d0a454baa913d8aa2c5bbc3fd0bced3453ec86f"
Dec 02 15:13:24 crc kubenswrapper[4900]: E1202 15:13:24.714214 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454ed42ee6fb36da3b41f5c04d0a454baa913d8aa2c5bbc3fd0bced3453ec86f\": container with ID starting with 454ed42ee6fb36da3b41f5c04d0a454baa913d8aa2c5bbc3fd0bced3453ec86f not found: ID does not exist" containerID="454ed42ee6fb36da3b41f5c04d0a454baa913d8aa2c5bbc3fd0bced3453ec86f"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.714239 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454ed42ee6fb36da3b41f5c04d0a454baa913d8aa2c5bbc3fd0bced3453ec86f"} err="failed to get container status \"454ed42ee6fb36da3b41f5c04d0a454baa913d8aa2c5bbc3fd0bced3453ec86f\": rpc error: code = NotFound desc = could not find container \"454ed42ee6fb36da3b41f5c04d0a454baa913d8aa2c5bbc3fd0bced3453ec86f\": container with ID starting with 454ed42ee6fb36da3b41f5c04d0a454baa913d8aa2c5bbc3fd0bced3453ec86f not found: ID does not exist"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.714256 4900 scope.go:117] "RemoveContainer" containerID="8424cccb9578507f70d85735062f9998323710edd4f672ca8734fceea104b8f2"
Dec 02 15:13:24 crc kubenswrapper[4900]: E1202 15:13:24.714487 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8424cccb9578507f70d85735062f9998323710edd4f672ca8734fceea104b8f2\": container with ID starting with 8424cccb9578507f70d85735062f9998323710edd4f672ca8734fceea104b8f2 not found: ID does not exist" containerID="8424cccb9578507f70d85735062f9998323710edd4f672ca8734fceea104b8f2"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.714531 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8424cccb9578507f70d85735062f9998323710edd4f672ca8734fceea104b8f2"} err="failed to get container status \"8424cccb9578507f70d85735062f9998323710edd4f672ca8734fceea104b8f2\": rpc error: code = NotFound desc = could not find container \"8424cccb9578507f70d85735062f9998323710edd4f672ca8734fceea104b8f2\": container with ID starting with 8424cccb9578507f70d85735062f9998323710edd4f672ca8734fceea104b8f2 not found: ID does not exist"
Dec 02 15:13:24 crc kubenswrapper[4900]: I1202 15:13:24.923865 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cad6fd34-4fe6-42af-860e-d619e1ec6709" path="/var/lib/kubelet/pods/cad6fd34-4fe6-42af-860e-d619e1ec6709/volumes"
Dec 02 15:13:28 crc kubenswrapper[4900]: I1202 15:13:28.674932 4900 generic.go:334] "Generic (PLEG): container finished" podID="1bcbe4ce-733f-4352-9564-826b6113b4dd" containerID="41ff189566196b65f42dea204a2041e602fa90e743bf9f85de3f0b31d32811e1" exitCode=0
Dec 02 15:13:28 crc kubenswrapper[4900]: I1202 15:13:28.675059 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qj5hh" event={"ID":"1bcbe4ce-733f-4352-9564-826b6113b4dd","Type":"ContainerDied","Data":"41ff189566196b65f42dea204a2041e602fa90e743bf9f85de3f0b31d32811e1"}
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.045162 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.087472 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-config\") pod \"1bcbe4ce-733f-4352-9564-826b6113b4dd\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") "
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.087545 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt5pp\" (UniqueName: \"kubernetes.io/projected/1bcbe4ce-733f-4352-9564-826b6113b4dd-kube-api-access-jt5pp\") pod \"1bcbe4ce-733f-4352-9564-826b6113b4dd\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") "
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.087620 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-combined-ca-bundle\") pod \"1bcbe4ce-733f-4352-9564-826b6113b4dd\" (UID: \"1bcbe4ce-733f-4352-9564-826b6113b4dd\") "
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.096822 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bcbe4ce-733f-4352-9564-826b6113b4dd-kube-api-access-jt5pp" (OuterVolumeSpecName: "kube-api-access-jt5pp") pod "1bcbe4ce-733f-4352-9564-826b6113b4dd" (UID: "1bcbe4ce-733f-4352-9564-826b6113b4dd"). InnerVolumeSpecName "kube-api-access-jt5pp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.129373 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-config" (OuterVolumeSpecName: "config") pod "1bcbe4ce-733f-4352-9564-826b6113b4dd" (UID: "1bcbe4ce-733f-4352-9564-826b6113b4dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.129435 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bcbe4ce-733f-4352-9564-826b6113b4dd" (UID: "1bcbe4ce-733f-4352-9564-826b6113b4dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.189997 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.190157 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bcbe4ce-733f-4352-9564-826b6113b4dd-config\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.190222 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt5pp\" (UniqueName: \"kubernetes.io/projected/1bcbe4ce-733f-4352-9564-826b6113b4dd-kube-api-access-jt5pp\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.704632 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qj5hh" event={"ID":"1bcbe4ce-733f-4352-9564-826b6113b4dd","Type":"ContainerDied","Data":"9300b2ccc741af6898a5141a83978fc04e69255e47d5ee7bb1ebf980a1566fb4"}
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.704727 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9300b2ccc741af6898a5141a83978fc04e69255e47d5ee7bb1ebf980a1566fb4"
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.704769 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qj5hh"
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.952806 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b48cdcfd9-d68w2"]
Dec 02 15:13:30 crc kubenswrapper[4900]: E1202 15:13:30.953347 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerName="extract-content"
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.953372 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerName="extract-content"
Dec 02 15:13:30 crc kubenswrapper[4900]: E1202 15:13:30.953385 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerName="registry-server"
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.953395 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerName="registry-server"
Dec 02 15:13:30 crc kubenswrapper[4900]: E1202 15:13:30.953409 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerName="extract-utilities"
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.953419 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerName="extract-utilities"
Dec 02 15:13:30 crc kubenswrapper[4900]: E1202 15:13:30.953457 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcbe4ce-733f-4352-9564-826b6113b4dd" containerName="neutron-db-sync"
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.953465 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcbe4ce-733f-4352-9564-826b6113b4dd" containerName="neutron-db-sync"
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.954175 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="cad6fd34-4fe6-42af-860e-d619e1ec6709" containerName="registry-server"
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.954206 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcbe4ce-733f-4352-9564-826b6113b4dd" containerName="neutron-db-sync"
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.955588 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:30 crc kubenswrapper[4900]: I1202 15:13:30.980881 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b48cdcfd9-d68w2"]
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.006606 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ch2p\" (UniqueName: \"kubernetes.io/projected/3aacdc40-241e-4194-9dff-4d60ea1de0a1-kube-api-access-5ch2p\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.006713 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-dns-svc\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.006801 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-nb\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.006833 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-config\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.006908 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-sb\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.037417 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64d9497c5c-zxhv8"]
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.038731 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.043723 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.044205 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.044385 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-shmlw"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.061406 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64d9497c5c-zxhv8"]
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.112237 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-combined-ca-bundle\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.112284 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-sb\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.112329 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-httpd-config\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.112419 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ch2p\" (UniqueName: \"kubernetes.io/projected/3aacdc40-241e-4194-9dff-4d60ea1de0a1-kube-api-access-5ch2p\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.112434 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-config\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.112471 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbq75\" (UniqueName: \"kubernetes.io/projected/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-kube-api-access-hbq75\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.112513 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-dns-svc\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.112623 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-nb\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.112673 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-config\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.113297 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-sb\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.113565 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-config\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.113980 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-dns-svc\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.114152 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-nb\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.149694 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ch2p\" (UniqueName: \"kubernetes.io/projected/3aacdc40-241e-4194-9dff-4d60ea1de0a1-kube-api-access-5ch2p\") pod \"dnsmasq-dns-b48cdcfd9-d68w2\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.214733 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-combined-ca-bundle\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.214799 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-httpd-config\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.214880 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-config\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.214920 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbq75\" (UniqueName: \"kubernetes.io/projected/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-kube-api-access-hbq75\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.218311 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-combined-ca-bundle\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.223277 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-httpd-config\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.227466 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-config\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.235892 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbq75\" (UniqueName: \"kubernetes.io/projected/7c81f731-76e5-4d22-ba31-c6fcaf3f699c-kube-api-access-hbq75\") pod \"neutron-64d9497c5c-zxhv8\" (UID: \"7c81f731-76e5-4d22-ba31-c6fcaf3f699c\") " pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.276258 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.363851 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:31 crc kubenswrapper[4900]: I1202 15:13:31.752276 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b48cdcfd9-d68w2"]
Dec 02 15:13:32 crc kubenswrapper[4900]: I1202 15:13:32.523517 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64d9497c5c-zxhv8"]
Dec 02 15:13:32 crc kubenswrapper[4900]: W1202 15:13:32.539228 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c81f731_76e5_4d22_ba31_c6fcaf3f699c.slice/crio-4c1dd4f1ef958c3a7907d938deb8380b799ff78011aff1fdd4edca529bc1e222 WatchSource:0}: Error finding container 4c1dd4f1ef958c3a7907d938deb8380b799ff78011aff1fdd4edca529bc1e222: Status 404 returned error can't find the container with id 4c1dd4f1ef958c3a7907d938deb8380b799ff78011aff1fdd4edca529bc1e222
Dec 02 15:13:32 crc kubenswrapper[4900]: I1202 15:13:32.749903 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d9497c5c-zxhv8" event={"ID":"7c81f731-76e5-4d22-ba31-c6fcaf3f699c","Type":"ContainerStarted","Data":"4c1dd4f1ef958c3a7907d938deb8380b799ff78011aff1fdd4edca529bc1e222"}
Dec 02 15:13:32 crc kubenswrapper[4900]: I1202 15:13:32.753367 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2" event={"ID":"3aacdc40-241e-4194-9dff-4d60ea1de0a1","Type":"ContainerStarted","Data":"5e22557a01ccccf670b90e7b356d57dbcf26c3b27904951c22728c5252677db6"}
Dec 02 15:13:33 crc kubenswrapper[4900]: I1202 15:13:33.763073 4900 generic.go:334] "Generic (PLEG): container finished" podID="3aacdc40-241e-4194-9dff-4d60ea1de0a1" containerID="d17c53732ddd2d07cc03572bef18424a8c1d95f16d99cdfa9ea37badd4791f82" exitCode=0
Dec 02 15:13:33 crc kubenswrapper[4900]: I1202 15:13:33.763179 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2" event={"ID":"3aacdc40-241e-4194-9dff-4d60ea1de0a1","Type":"ContainerDied","Data":"d17c53732ddd2d07cc03572bef18424a8c1d95f16d99cdfa9ea37badd4791f82"}
Dec 02 15:13:33 crc kubenswrapper[4900]: I1202 15:13:33.770224 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d9497c5c-zxhv8" event={"ID":"7c81f731-76e5-4d22-ba31-c6fcaf3f699c","Type":"ContainerStarted","Data":"11b31893111834fb3f323e1dc876ff517bdbaa42e9a12b042ce391c6e87e1452"}
Dec 02 15:13:33 crc kubenswrapper[4900]: I1202 15:13:33.770272 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:13:33 crc kubenswrapper[4900]: I1202 15:13:33.770286 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d9497c5c-zxhv8" event={"ID":"7c81f731-76e5-4d22-ba31-c6fcaf3f699c","Type":"ContainerStarted","Data":"2f43da48387b39fde6da85628be7ac1647e34bba73009c74f908523e9ef7a580"}
Dec 02 15:13:33 crc kubenswrapper[4900]: I1202 15:13:33.819386 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64d9497c5c-zxhv8" podStartSLOduration=2.819363076 podStartE2EDuration="2.819363076s" podCreationTimestamp="2025-12-02 15:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:13:33.819305665 +0000 UTC m=+5459.235119516" watchObservedRunningTime="2025-12-02 15:13:33.819363076 +0000 UTC m=+5459.235176937"
Dec 02 15:13:34 crc kubenswrapper[4900]: I1202 15:13:34.780252 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2" event={"ID":"3aacdc40-241e-4194-9dff-4d60ea1de0a1","Type":"ContainerStarted","Data":"b157d8ca253d5d730585208987eaf5b7eabef78ecdd0072f2d8558e9aaa43e77"}
Dec 02 15:13:34 crc kubenswrapper[4900]: I1202 15:13:34.780590 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:34 crc kubenswrapper[4900]: I1202 15:13:34.798561 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2" podStartSLOduration=4.7985400160000005 podStartE2EDuration="4.798540016s" podCreationTimestamp="2025-12-02 15:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:13:34.796197961 +0000 UTC m=+5460.212011842" watchObservedRunningTime="2025-12-02 15:13:34.798540016 +0000 UTC m=+5460.214353887"
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.277825 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2"
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.349402 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4f7856c-8t6l5"]
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.349872 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" podUID="f682a100-9c64-45b0-97f1-e896fb0b9fe8" containerName="dnsmasq-dns" containerID="cri-o://982ebc35218e99919d6191c10cd75f441ed4a42ee5c7787b236bab706c1a00a9" gracePeriod=10
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.861878 4900 generic.go:334] "Generic (PLEG): container finished" podID="f682a100-9c64-45b0-97f1-e896fb0b9fe8" containerID="982ebc35218e99919d6191c10cd75f441ed4a42ee5c7787b236bab706c1a00a9" exitCode=0
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.861942 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" event={"ID":"f682a100-9c64-45b0-97f1-e896fb0b9fe8","Type":"ContainerDied","Data":"982ebc35218e99919d6191c10cd75f441ed4a42ee5c7787b236bab706c1a00a9"}
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.862208 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5" event={"ID":"f682a100-9c64-45b0-97f1-e896fb0b9fe8","Type":"ContainerDied","Data":"8d3dcf3bc7336716cef8f1c7e9eb2756cab33bd168e2afc4371aa3d7be5e2e9b"}
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.862226 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d3dcf3bc7336716cef8f1c7e9eb2756cab33bd168e2afc4371aa3d7be5e2e9b"
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.909487 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5"
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.919989 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-config\") pod \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") "
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.920058 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-sb\") pod \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") "
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.920103 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-dns-svc\") pod \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") "
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.921320 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-nb\") pod \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") "
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.921367 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbbzb\" (UniqueName: \"kubernetes.io/projected/f682a100-9c64-45b0-97f1-e896fb0b9fe8-kube-api-access-hbbzb\") pod \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\" (UID: \"f682a100-9c64-45b0-97f1-e896fb0b9fe8\") "
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.926066 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f682a100-9c64-45b0-97f1-e896fb0b9fe8-kube-api-access-hbbzb" (OuterVolumeSpecName: "kube-api-access-hbbzb") pod "f682a100-9c64-45b0-97f1-e896fb0b9fe8" (UID: "f682a100-9c64-45b0-97f1-e896fb0b9fe8"). InnerVolumeSpecName "kube-api-access-hbbzb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.990686 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f682a100-9c64-45b0-97f1-e896fb0b9fe8" (UID: "f682a100-9c64-45b0-97f1-e896fb0b9fe8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.993977 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f682a100-9c64-45b0-97f1-e896fb0b9fe8" (UID: "f682a100-9c64-45b0-97f1-e896fb0b9fe8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:13:41 crc kubenswrapper[4900]: I1202 15:13:41.998946 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f682a100-9c64-45b0-97f1-e896fb0b9fe8" (UID: "f682a100-9c64-45b0-97f1-e896fb0b9fe8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:13:42 crc kubenswrapper[4900]: I1202 15:13:42.002289 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-config" (OuterVolumeSpecName: "config") pod "f682a100-9c64-45b0-97f1-e896fb0b9fe8" (UID: "f682a100-9c64-45b0-97f1-e896fb0b9fe8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:13:42 crc kubenswrapper[4900]: I1202 15:13:42.023315 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:42 crc kubenswrapper[4900]: I1202 15:13:42.023347 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:42 crc kubenswrapper[4900]: I1202 15:13:42.023358 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbbzb\" (UniqueName: \"kubernetes.io/projected/f682a100-9c64-45b0-97f1-e896fb0b9fe8-kube-api-access-hbbzb\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:42 crc kubenswrapper[4900]: I1202 15:13:42.023367 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-config\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:42 crc kubenswrapper[4900]: I1202 15:13:42.023376 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f682a100-9c64-45b0-97f1-e896fb0b9fe8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:42 crc kubenswrapper[4900]: I1202 15:13:42.871032 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4f7856c-8t6l5"
Dec 02 15:13:42 crc kubenswrapper[4900]: I1202 15:13:42.899060 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4f7856c-8t6l5"]
Dec 02 15:13:42 crc kubenswrapper[4900]: I1202 15:13:42.908078 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4f7856c-8t6l5"]
Dec 02 15:13:42 crc kubenswrapper[4900]: I1202 15:13:42.920279 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f682a100-9c64-45b0-97f1-e896fb0b9fe8" path="/var/lib/kubelet/pods/f682a100-9c64-45b0-97f1-e896fb0b9fe8/volumes"
Dec 02 15:13:45 crc kubenswrapper[4900]: I1202 15:13:45.117053 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 15:13:45 crc kubenswrapper[4900]: I1202 15:13:45.117448 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 15:14:01 crc kubenswrapper[4900]: I1202 15:14:01.376525 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64d9497c5c-zxhv8"
Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.405141 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2k57m"]
Dec 02 15:14:08 crc kubenswrapper[4900]: E1202 15:14:08.406030 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f682a100-9c64-45b0-97f1-e896fb0b9fe8" containerName="dnsmasq-dns"
Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.406044 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f682a100-9c64-45b0-97f1-e896fb0b9fe8" containerName="dnsmasq-dns"
Dec 02 15:14:08 crc kubenswrapper[4900]: E1202 15:14:08.406075 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f682a100-9c64-45b0-97f1-e896fb0b9fe8" containerName="init"
Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.406102 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f682a100-9c64-45b0-97f1-e896fb0b9fe8" containerName="init"
Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.406302 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f682a100-9c64-45b0-97f1-e896fb0b9fe8" containerName="dnsmasq-dns"
Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.406980 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2k57m" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.415996 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2k57m"] Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.483774 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s477v\" (UniqueName: \"kubernetes.io/projected/ddd2e113-06ec-473e-b9d3-d064bdcf430e-kube-api-access-s477v\") pod \"glance-db-create-2k57m\" (UID: \"ddd2e113-06ec-473e-b9d3-d064bdcf430e\") " pod="openstack/glance-db-create-2k57m" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.483855 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd2e113-06ec-473e-b9d3-d064bdcf430e-operator-scripts\") pod \"glance-db-create-2k57m\" (UID: \"ddd2e113-06ec-473e-b9d3-d064bdcf430e\") " pod="openstack/glance-db-create-2k57m" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.505841 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d6db-account-create-update-7w9q6"] Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.507749 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d6db-account-create-update-7w9q6" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.513060 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d6db-account-create-update-7w9q6"] Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.513077 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.586561 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd2e113-06ec-473e-b9d3-d064bdcf430e-operator-scripts\") pod \"glance-db-create-2k57m\" (UID: \"ddd2e113-06ec-473e-b9d3-d064bdcf430e\") " pod="openstack/glance-db-create-2k57m" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.586637 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m4p2\" (UniqueName: \"kubernetes.io/projected/717f6540-b7e2-4a49-9de1-72291cf6e532-kube-api-access-6m4p2\") pod \"glance-d6db-account-create-update-7w9q6\" (UID: \"717f6540-b7e2-4a49-9de1-72291cf6e532\") " pod="openstack/glance-d6db-account-create-update-7w9q6" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.586771 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/717f6540-b7e2-4a49-9de1-72291cf6e532-operator-scripts\") pod \"glance-d6db-account-create-update-7w9q6\" (UID: \"717f6540-b7e2-4a49-9de1-72291cf6e532\") " pod="openstack/glance-d6db-account-create-update-7w9q6" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.586795 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s477v\" (UniqueName: \"kubernetes.io/projected/ddd2e113-06ec-473e-b9d3-d064bdcf430e-kube-api-access-s477v\") pod \"glance-db-create-2k57m\" (UID: \"ddd2e113-06ec-473e-b9d3-d064bdcf430e\") " pod="openstack/glance-db-create-2k57m" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.587503 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ddd2e113-06ec-473e-b9d3-d064bdcf430e-operator-scripts\") pod \"glance-db-create-2k57m\" (UID: \"ddd2e113-06ec-473e-b9d3-d064bdcf430e\") " pod="openstack/glance-db-create-2k57m" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.607365 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s477v\" (UniqueName: \"kubernetes.io/projected/ddd2e113-06ec-473e-b9d3-d064bdcf430e-kube-api-access-s477v\") pod \"glance-db-create-2k57m\" (UID: \"ddd2e113-06ec-473e-b9d3-d064bdcf430e\") " pod="openstack/glance-db-create-2k57m" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.688914 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/717f6540-b7e2-4a49-9de1-72291cf6e532-operator-scripts\") pod \"glance-d6db-account-create-update-7w9q6\" (UID: \"717f6540-b7e2-4a49-9de1-72291cf6e532\") " pod="openstack/glance-d6db-account-create-update-7w9q6" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.689019 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m4p2\" (UniqueName: \"kubernetes.io/projected/717f6540-b7e2-4a49-9de1-72291cf6e532-kube-api-access-6m4p2\") pod \"glance-d6db-account-create-update-7w9q6\" (UID: \"717f6540-b7e2-4a49-9de1-72291cf6e532\") " pod="openstack/glance-d6db-account-create-update-7w9q6" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.690372 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/717f6540-b7e2-4a49-9de1-72291cf6e532-operator-scripts\") pod \"glance-d6db-account-create-update-7w9q6\" (UID: \"717f6540-b7e2-4a49-9de1-72291cf6e532\") " pod="openstack/glance-d6db-account-create-update-7w9q6" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.729075 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2k57m" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.743285 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m4p2\" (UniqueName: \"kubernetes.io/projected/717f6540-b7e2-4a49-9de1-72291cf6e532-kube-api-access-6m4p2\") pod \"glance-d6db-account-create-update-7w9q6\" (UID: \"717f6540-b7e2-4a49-9de1-72291cf6e532\") " pod="openstack/glance-d6db-account-create-update-7w9q6" Dec 02 15:14:08 crc kubenswrapper[4900]: I1202 15:14:08.834340 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d6db-account-create-update-7w9q6" Dec 02 15:14:09 crc kubenswrapper[4900]: I1202 15:14:09.371788 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d6db-account-create-update-7w9q6"] Dec 02 15:14:09 crc kubenswrapper[4900]: I1202 15:14:09.436026 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2k57m"] Dec 02 15:14:10 crc kubenswrapper[4900]: I1202 15:14:10.205270 4900 generic.go:334] "Generic (PLEG): container finished" podID="ddd2e113-06ec-473e-b9d3-d064bdcf430e" containerID="7a5af63b513fa6dd091b7e151bfe5c79adeb61d4e3cec885e729cd187fd5a197" exitCode=0 Dec 02 15:14:10 crc kubenswrapper[4900]: I1202 15:14:10.205379 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2k57m" event={"ID":"ddd2e113-06ec-473e-b9d3-d064bdcf430e","Type":"ContainerDied","Data":"7a5af63b513fa6dd091b7e151bfe5c79adeb61d4e3cec885e729cd187fd5a197"} Dec 02 15:14:10 crc kubenswrapper[4900]: I1202 15:14:10.205613 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2k57m" event={"ID":"ddd2e113-06ec-473e-b9d3-d064bdcf430e","Type":"ContainerStarted","Data":"6cb2880fbb3553e1387e61b7684a094962797aee3342b7533915f44abd879830"} Dec 02 15:14:10 crc kubenswrapper[4900]: I1202 15:14:10.208308 4900 generic.go:334] "Generic (PLEG): container finished" podID="717f6540-b7e2-4a49-9de1-72291cf6e532" containerID="a49c768e61802a505f088beb662353870f987dd9c09ace4e018efe29cb99917d" exitCode=0 Dec 02 15:14:10 crc kubenswrapper[4900]: I1202 15:14:10.208388 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d6db-account-create-update-7w9q6" event={"ID":"717f6540-b7e2-4a49-9de1-72291cf6e532","Type":"ContainerDied","Data":"a49c768e61802a505f088beb662353870f987dd9c09ace4e018efe29cb99917d"} Dec 02 15:14:10 crc kubenswrapper[4900]: I1202 15:14:10.208698 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d6db-account-create-update-7w9q6" event={"ID":"717f6540-b7e2-4a49-9de1-72291cf6e532","Type":"ContainerStarted","Data":"b0a1d546f39271a76b8e1a2b7189972c1f6f18f3993f7c65b957933a188f30bc"} Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.641395 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d6db-account-create-update-7w9q6" Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.648450 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2k57m" Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.843216 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/717f6540-b7e2-4a49-9de1-72291cf6e532-operator-scripts\") pod \"717f6540-b7e2-4a49-9de1-72291cf6e532\" (UID: \"717f6540-b7e2-4a49-9de1-72291cf6e532\") " Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.843324 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd2e113-06ec-473e-b9d3-d064bdcf430e-operator-scripts\") pod \"ddd2e113-06ec-473e-b9d3-d064bdcf430e\" (UID: \"ddd2e113-06ec-473e-b9d3-d064bdcf430e\") " Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.843469 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m4p2\" (UniqueName: \"kubernetes.io/projected/717f6540-b7e2-4a49-9de1-72291cf6e532-kube-api-access-6m4p2\") pod \"717f6540-b7e2-4a49-9de1-72291cf6e532\" (UID: \"717f6540-b7e2-4a49-9de1-72291cf6e532\") " Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.843525 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s477v\" (UniqueName: \"kubernetes.io/projected/ddd2e113-06ec-473e-b9d3-d064bdcf430e-kube-api-access-s477v\") pod \"ddd2e113-06ec-473e-b9d3-d064bdcf430e\" (UID: \"ddd2e113-06ec-473e-b9d3-d064bdcf430e\") " Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.844195 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddd2e113-06ec-473e-b9d3-d064bdcf430e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ddd2e113-06ec-473e-b9d3-d064bdcf430e" (UID: "ddd2e113-06ec-473e-b9d3-d064bdcf430e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.844346 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/717f6540-b7e2-4a49-9de1-72291cf6e532-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "717f6540-b7e2-4a49-9de1-72291cf6e532" (UID: "717f6540-b7e2-4a49-9de1-72291cf6e532"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.852232 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/717f6540-b7e2-4a49-9de1-72291cf6e532-kube-api-access-6m4p2" (OuterVolumeSpecName: "kube-api-access-6m4p2") pod "717f6540-b7e2-4a49-9de1-72291cf6e532" (UID: "717f6540-b7e2-4a49-9de1-72291cf6e532"). InnerVolumeSpecName "kube-api-access-6m4p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.852421 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd2e113-06ec-473e-b9d3-d064bdcf430e-kube-api-access-s477v" (OuterVolumeSpecName: "kube-api-access-s477v") pod "ddd2e113-06ec-473e-b9d3-d064bdcf430e" (UID: "ddd2e113-06ec-473e-b9d3-d064bdcf430e"). InnerVolumeSpecName "kube-api-access-s477v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.945291 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m4p2\" (UniqueName: \"kubernetes.io/projected/717f6540-b7e2-4a49-9de1-72291cf6e532-kube-api-access-6m4p2\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.945817 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s477v\" (UniqueName: \"kubernetes.io/projected/ddd2e113-06ec-473e-b9d3-d064bdcf430e-kube-api-access-s477v\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.945927 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/717f6540-b7e2-4a49-9de1-72291cf6e532-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:11 crc kubenswrapper[4900]: I1202 15:14:11.945997 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd2e113-06ec-473e-b9d3-d064bdcf430e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:12 crc kubenswrapper[4900]: I1202 15:14:12.227559 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2k57m" event={"ID":"ddd2e113-06ec-473e-b9d3-d064bdcf430e","Type":"ContainerDied","Data":"6cb2880fbb3553e1387e61b7684a094962797aee3342b7533915f44abd879830"} Dec 02 15:14:12 crc kubenswrapper[4900]: I1202 15:14:12.227885 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cb2880fbb3553e1387e61b7684a094962797aee3342b7533915f44abd879830" Dec 02 15:14:12 crc kubenswrapper[4900]: I1202 15:14:12.227626 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2k57m" Dec 02 15:14:12 crc kubenswrapper[4900]: I1202 15:14:12.229587 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d6db-account-create-update-7w9q6" event={"ID":"717f6540-b7e2-4a49-9de1-72291cf6e532","Type":"ContainerDied","Data":"b0a1d546f39271a76b8e1a2b7189972c1f6f18f3993f7c65b957933a188f30bc"} Dec 02 15:14:12 crc kubenswrapper[4900]: I1202 15:14:12.229634 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0a1d546f39271a76b8e1a2b7189972c1f6f18f3993f7c65b957933a188f30bc" Dec 02 15:14:12 crc kubenswrapper[4900]: I1202 15:14:12.229672 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d6db-account-create-update-7w9q6" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.703004 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-lmjzm"] Dec 02 15:14:13 crc kubenswrapper[4900]: E1202 15:14:13.703372 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="717f6540-b7e2-4a49-9de1-72291cf6e532" containerName="mariadb-account-create-update" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.703388 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="717f6540-b7e2-4a49-9de1-72291cf6e532" containerName="mariadb-account-create-update" Dec 02 15:14:13 crc kubenswrapper[4900]: E1202 15:14:13.703431 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd2e113-06ec-473e-b9d3-d064bdcf430e" containerName="mariadb-database-create" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.703441 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd2e113-06ec-473e-b9d3-d064bdcf430e" containerName="mariadb-database-create" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.703885 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd2e113-06ec-473e-b9d3-d064bdcf430e" containerName="mariadb-database-create" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.703915 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="717f6540-b7e2-4a49-9de1-72291cf6e532" containerName="mariadb-account-create-update" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.704664 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.708132 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.708323 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dn9lx" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.724143 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-lmjzm"] Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.781010 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-combined-ca-bundle\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.781072 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-db-sync-config-data\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.781116 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-config-data\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.781203 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvtvg\" (UniqueName: 
\"kubernetes.io/projected/b2828f33-f08a-4ef5-8995-f2a5b72de227-kube-api-access-lvtvg\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.882451 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-db-sync-config-data\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.882507 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-config-data\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.882586 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvtvg\" (UniqueName: \"kubernetes.io/projected/b2828f33-f08a-4ef5-8995-f2a5b72de227-kube-api-access-lvtvg\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.882664 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-combined-ca-bundle\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.886872 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-config-data\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.886951 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-db-sync-config-data\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.890060 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-combined-ca-bundle\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:13 crc kubenswrapper[4900]: I1202 15:14:13.923439 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvtvg\" (UniqueName: \"kubernetes.io/projected/b2828f33-f08a-4ef5-8995-f2a5b72de227-kube-api-access-lvtvg\") pod \"glance-db-sync-lmjzm\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:14 crc kubenswrapper[4900]: I1202 15:14:14.031519 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:14 crc kubenswrapper[4900]: I1202 15:14:14.418929 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-lmjzm"] Dec 02 15:14:15 crc kubenswrapper[4900]: I1202 15:14:15.116122 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:14:15 crc kubenswrapper[4900]: I1202 15:14:15.116507 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:14:15 crc kubenswrapper[4900]: I1202 15:14:15.259973 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lmjzm" event={"ID":"b2828f33-f08a-4ef5-8995-f2a5b72de227","Type":"ContainerStarted","Data":"94827944a2d4e0fc86dcf15b4764fbaaeff20a8cb0f99fe4c7a197840a65415c"} Dec 02 15:14:15 crc kubenswrapper[4900]: I1202 15:14:15.260034 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lmjzm" event={"ID":"b2828f33-f08a-4ef5-8995-f2a5b72de227","Type":"ContainerStarted","Data":"685a20e39ffd2ab7cee6fc272a9e50438039901d3ed09723b505c382c6ffe5de"} Dec 02 15:14:15 crc kubenswrapper[4900]: I1202 15:14:15.287395 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-lmjzm" podStartSLOduration=2.287374701 podStartE2EDuration="2.287374701s" podCreationTimestamp="2025-12-02 15:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:14:15.274567542 +0000 UTC m=+5500.690381393" watchObservedRunningTime="2025-12-02 15:14:15.287374701 +0000 UTC m=+5500.703188572" Dec 02 15:14:18 crc kubenswrapper[4900]: I1202 15:14:18.295749 4900 generic.go:334] "Generic (PLEG): container finished" podID="b2828f33-f08a-4ef5-8995-f2a5b72de227" containerID="94827944a2d4e0fc86dcf15b4764fbaaeff20a8cb0f99fe4c7a197840a65415c" exitCode=0 Dec 02 15:14:18 crc kubenswrapper[4900]: I1202 15:14:18.295856 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lmjzm" event={"ID":"b2828f33-f08a-4ef5-8995-f2a5b72de227","Type":"ContainerDied","Data":"94827944a2d4e0fc86dcf15b4764fbaaeff20a8cb0f99fe4c7a197840a65415c"} Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.717847 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.885936 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-config-data\") pod \"b2828f33-f08a-4ef5-8995-f2a5b72de227\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.886018 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-db-sync-config-data\") pod \"b2828f33-f08a-4ef5-8995-f2a5b72de227\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.886097 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-combined-ca-bundle\") pod \"b2828f33-f08a-4ef5-8995-f2a5b72de227\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.886213 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvtvg\" (UniqueName: \"kubernetes.io/projected/b2828f33-f08a-4ef5-8995-f2a5b72de227-kube-api-access-lvtvg\") pod \"b2828f33-f08a-4ef5-8995-f2a5b72de227\" (UID: \"b2828f33-f08a-4ef5-8995-f2a5b72de227\") " Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.899899 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b2828f33-f08a-4ef5-8995-f2a5b72de227" (UID: "b2828f33-f08a-4ef5-8995-f2a5b72de227"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.906871 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2828f33-f08a-4ef5-8995-f2a5b72de227-kube-api-access-lvtvg" (OuterVolumeSpecName: "kube-api-access-lvtvg") pod "b2828f33-f08a-4ef5-8995-f2a5b72de227" (UID: "b2828f33-f08a-4ef5-8995-f2a5b72de227"). InnerVolumeSpecName "kube-api-access-lvtvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.937703 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2828f33-f08a-4ef5-8995-f2a5b72de227" (UID: "b2828f33-f08a-4ef5-8995-f2a5b72de227"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.973572 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-config-data" (OuterVolumeSpecName: "config-data") pod "b2828f33-f08a-4ef5-8995-f2a5b72de227" (UID: "b2828f33-f08a-4ef5-8995-f2a5b72de227"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.988839 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.988886 4900 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.988905 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2828f33-f08a-4ef5-8995-f2a5b72de227-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:19 crc kubenswrapper[4900]: I1202 15:14:19.988921 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvtvg\" (UniqueName: \"kubernetes.io/projected/b2828f33-f08a-4ef5-8995-f2a5b72de227-kube-api-access-lvtvg\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.320933 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lmjzm" event={"ID":"b2828f33-f08a-4ef5-8995-f2a5b72de227","Type":"ContainerDied","Data":"685a20e39ffd2ab7cee6fc272a9e50438039901d3ed09723b505c382c6ffe5de"} Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.320978 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685a20e39ffd2ab7cee6fc272a9e50438039901d3ed09723b505c382c6ffe5de" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.321477 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-lmjzm" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.744015 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64484ff4c7-krmgz"] Dec 02 15:14:20 crc kubenswrapper[4900]: E1202 15:14:20.744382 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2828f33-f08a-4ef5-8995-f2a5b72de227" containerName="glance-db-sync" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.744394 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2828f33-f08a-4ef5-8995-f2a5b72de227" containerName="glance-db-sync" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.744564 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2828f33-f08a-4ef5-8995-f2a5b72de227" containerName="glance-db-sync" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.745385 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.808056 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64484ff4c7-krmgz"] Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.820066 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.821766 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.829722 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.829765 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.832448 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.833030 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dn9lx" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.838163 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.850383 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.851745 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.854203 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.879335 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911235 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911279 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-config\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911332 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-scripts\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911377 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp8mq\" (UniqueName: \"kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-kube-api-access-zp8mq\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911408 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-sb\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: 
\"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911480 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmwdc\" (UniqueName: \"kubernetes.io/projected/9025b472-e2f6-4adb-a063-78c26a15a1ca-kube-api-access-bmwdc\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911575 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-config-data\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911672 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-dns-svc\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911692 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-nb\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911849 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911881 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-logs\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:20 crc kubenswrapper[4900]: I1202 15:14:20.911902 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-ceph\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.013441 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.013752 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.013781 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.013820 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-config-data\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.013876 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-dns-svc\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.013899 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-nb\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.013934 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.013986 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.014017 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.014056 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-logs\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.014078 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-ceph\") pod 
\"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.014116 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.014140 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-config\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.014177 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-scripts\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.014213 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp8mq\" (UniqueName: \"kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-kube-api-access-zp8mq\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.014237 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-sb\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.014292 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djmd4\" (UniqueName: \"kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-kube-api-access-djmd4\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.014348 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmwdc\" (UniqueName: \"kubernetes.io/projected/9025b472-e2f6-4adb-a063-78c26a15a1ca-kube-api-access-bmwdc\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.014370 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.015408 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-logs\") pod \"glance-default-external-api-0\" (UID: 
\"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.016905 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-dns-svc\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.017447 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-nb\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.018031 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.018567 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-config\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.020270 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-sb\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.021209 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-scripts\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.021277 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-ceph\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.031530 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-config-data\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.032545 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp8mq\" (UniqueName: \"kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-kube-api-access-zp8mq\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.032789 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.044075 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmwdc\" (UniqueName: \"kubernetes.io/projected/9025b472-e2f6-4adb-a063-78c26a15a1ca-kube-api-access-bmwdc\") pod \"dnsmasq-dns-64484ff4c7-krmgz\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.084006 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.116620 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.116725 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.116845 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djmd4\" (UniqueName: \"kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-kube-api-access-djmd4\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.116871 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.116897 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.116926 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.116953 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 
02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.117525 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.119012 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.130109 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.133386 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.136287 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.138844 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.148100 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.172505 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djmd4\" (UniqueName: \"kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-kube-api-access-djmd4\") pod \"glance-default-internal-api-0\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.469147 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.807506 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64484ff4c7-krmgz"] Dec 02 15:14:21 crc kubenswrapper[4900]: W1202 15:14:21.819537 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9025b472_e2f6_4adb_a063_78c26a15a1ca.slice/crio-91bb68dffc9e287a26b4dc4fe360d90c58181114c3716139260affbff54326f7 WatchSource:0}: Error finding container 91bb68dffc9e287a26b4dc4fe360d90c58181114c3716139260affbff54326f7: Status 404 returned error can't find the container with id 91bb68dffc9e287a26b4dc4fe360d90c58181114c3716139260affbff54326f7 Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.915326 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:14:21 crc kubenswrapper[4900]: W1202 15:14:21.915322 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81cc8ab1_a91c_4d5a_b9ba_a4b6c3eda0de.slice/crio-5f528cf5933e6614c160027648c55e29d005422001cfbf89748e343799d9901d WatchSource:0}: Error finding container 5f528cf5933e6614c160027648c55e29d005422001cfbf89748e343799d9901d: Status 404 returned error can't find the container with id 5f528cf5933e6614c160027648c55e29d005422001cfbf89748e343799d9901d Dec 02 15:14:21 crc kubenswrapper[4900]: I1202 15:14:21.991743 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:14:22 crc kubenswrapper[4900]: I1202 15:14:22.089468 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:14:22 crc kubenswrapper[4900]: W1202 15:14:22.120089 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbcc99e7_b7bd_4912_b198_92fbf94c16c1.slice/crio-c5dc4658f2daff7be3aab82c3d77b7fb791fa5ac7444699c4aea167d4c6824bc WatchSource:0}: Error finding container c5dc4658f2daff7be3aab82c3d77b7fb791fa5ac7444699c4aea167d4c6824bc: Status 404 returned error can't find the container with id c5dc4658f2daff7be3aab82c3d77b7fb791fa5ac7444699c4aea167d4c6824bc Dec 02 15:14:22 crc kubenswrapper[4900]: I1202 15:14:22.363337 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fbcc99e7-b7bd-4912-b198-92fbf94c16c1","Type":"ContainerStarted","Data":"c5dc4658f2daff7be3aab82c3d77b7fb791fa5ac7444699c4aea167d4c6824bc"} Dec 02 15:14:22 crc kubenswrapper[4900]: I1202 15:14:22.365709 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de","Type":"ContainerStarted","Data":"5f528cf5933e6614c160027648c55e29d005422001cfbf89748e343799d9901d"} Dec 02 15:14:22 crc kubenswrapper[4900]: I1202 15:14:22.368718 4900 generic.go:334] "Generic (PLEG): container finished" podID="9025b472-e2f6-4adb-a063-78c26a15a1ca" containerID="b61ae6544a56c04ba636242e30c587077676d7f96496d36675e778e55ef17253" exitCode=0 Dec 02 15:14:22 crc kubenswrapper[4900]: I1202 15:14:22.368769 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" event={"ID":"9025b472-e2f6-4adb-a063-78c26a15a1ca","Type":"ContainerDied","Data":"b61ae6544a56c04ba636242e30c587077676d7f96496d36675e778e55ef17253"} 
Dec 02 15:14:22 crc kubenswrapper[4900]: I1202 15:14:22.368801 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" event={"ID":"9025b472-e2f6-4adb-a063-78c26a15a1ca","Type":"ContainerStarted","Data":"91bb68dffc9e287a26b4dc4fe360d90c58181114c3716139260affbff54326f7"} Dec 02 15:14:23 crc kubenswrapper[4900]: I1202 15:14:23.379387 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" event={"ID":"9025b472-e2f6-4adb-a063-78c26a15a1ca","Type":"ContainerStarted","Data":"30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5"} Dec 02 15:14:23 crc kubenswrapper[4900]: I1202 15:14:23.380082 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:23 crc kubenswrapper[4900]: I1202 15:14:23.382762 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fbcc99e7-b7bd-4912-b198-92fbf94c16c1","Type":"ContainerStarted","Data":"11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52"} Dec 02 15:14:23 crc kubenswrapper[4900]: I1202 15:14:23.385072 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de","Type":"ContainerStarted","Data":"23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7"} Dec 02 15:14:23 crc kubenswrapper[4900]: I1202 15:14:23.385104 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de","Type":"ContainerStarted","Data":"28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138"} Dec 02 15:14:23 crc kubenswrapper[4900]: I1202 15:14:23.385229 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" containerName="glance-log" containerID="cri-o://28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138" gracePeriod=30 Dec 02 15:14:23 crc kubenswrapper[4900]: I1202 15:14:23.385260 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" containerName="glance-httpd" containerID="cri-o://23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7" gracePeriod=30 Dec 02 15:14:23 crc kubenswrapper[4900]: I1202 15:14:23.406233 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" podStartSLOduration=3.406216507 podStartE2EDuration="3.406216507s" podCreationTimestamp="2025-12-02 15:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:14:23.395461886 +0000 UTC m=+5508.811275747" watchObservedRunningTime="2025-12-02 15:14:23.406216507 +0000 UTC m=+5508.822030358" Dec 02 15:14:23 crc kubenswrapper[4900]: I1202 15:14:23.431036 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.431014433 podStartE2EDuration="3.431014433s" podCreationTimestamp="2025-12-02 15:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:14:23.425496739 +0000 UTC m=+5508.841310600" 
watchObservedRunningTime="2025-12-02 15:14:23.431014433 +0000 UTC m=+5508.846828284" Dec 02 15:14:23 crc kubenswrapper[4900]: I1202 15:14:23.913376 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:14:23 crc kubenswrapper[4900]: I1202 15:14:23.931334 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.079761 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-combined-ca-bundle\") pod \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.079842 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-httpd-run\") pod \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.079883 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp8mq\" (UniqueName: \"kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-kube-api-access-zp8mq\") pod \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.079921 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-logs\") pod \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.079953 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-ceph\") pod \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.079969 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-config-data\") pod \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.080033 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-scripts\") pod \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\" (UID: \"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de\") " Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.080449 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" (UID: "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.080704 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-logs" (OuterVolumeSpecName: "logs") pod "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" (UID: "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.101767 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-ceph" (OuterVolumeSpecName: "ceph") pod "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" (UID: "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.103978 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-scripts" (OuterVolumeSpecName: "scripts") pod "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" (UID: "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.104795 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-kube-api-access-zp8mq" (OuterVolumeSpecName: "kube-api-access-zp8mq") pod "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" (UID: "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de"). InnerVolumeSpecName "kube-api-access-zp8mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.135314 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" (UID: "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.139807 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-config-data" (OuterVolumeSpecName: "config-data") pod "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" (UID: "81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.188661 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.188958 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.188970 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp8mq\" (UniqueName: \"kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-kube-api-access-zp8mq\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.188983 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.189005 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.189014 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.189023 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.399760 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fbcc99e7-b7bd-4912-b198-92fbf94c16c1","Type":"ContainerStarted","Data":"2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae"} Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.412180 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.412333 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de","Type":"ContainerDied","Data":"23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7"} Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.412397 4900 scope.go:117] "RemoveContainer" containerID="23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.419911 4900 generic.go:334] "Generic (PLEG): container finished" podID="81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" containerID="23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7" exitCode=143 Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.419976 4900 generic.go:334] "Generic (PLEG): container finished" podID="81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" containerID="28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138" exitCode=143 Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.420068 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de","Type":"ContainerDied","Data":"28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138"} Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.420145 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de","Type":"ContainerDied","Data":"5f528cf5933e6614c160027648c55e29d005422001cfbf89748e343799d9901d"} Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.438775 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.438757105 podStartE2EDuration="4.438757105s" podCreationTimestamp="2025-12-02 15:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:14:24.438179489 +0000 UTC m=+5509.853993340" watchObservedRunningTime="2025-12-02 15:14:24.438757105 +0000 UTC m=+5509.854570956" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.452103 4900 scope.go:117] "RemoveContainer" containerID="28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.457382 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.475199 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.486830 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:14:24 crc kubenswrapper[4900]: E1202 15:14:24.487306 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" containerName="glance-log" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.487325 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" containerName="glance-log" Dec 02 15:14:24 crc kubenswrapper[4900]: E1202 15:14:24.487360 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" containerName="glance-httpd" Dec 02 15:14:24 
crc kubenswrapper[4900]: I1202 15:14:24.487368 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" containerName="glance-httpd" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.487579 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" containerName="glance-log" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.487599 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" containerName="glance-httpd" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.490322 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.496577 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.500875 4900 scope.go:117] "RemoveContainer" containerID="23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7" Dec 02 15:14:24 crc kubenswrapper[4900]: E1202 15:14:24.503438 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7\": container with ID starting with 23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7 not found: ID does not exist" containerID="23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.503472 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7"} err="failed to get container status \"23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7\": rpc error: code = NotFound desc = could not find container \"23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7\": container with ID starting with 23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7 not found: ID does not exist" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.503496 4900 scope.go:117] "RemoveContainer" containerID="28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138" Dec 02 15:14:24 crc kubenswrapper[4900]: E1202 15:14:24.503812 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138\": container with ID starting with 28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138 not found: ID does not exist" containerID="28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.503837 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138"} err="failed to get container status \"28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138\": rpc error: code = NotFound desc = could not find container \"28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138\": container with ID starting with 28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138 not found: ID does not exist" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.503854 4900 scope.go:117] "RemoveContainer" 
containerID="23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.505311 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7"} err="failed to get container status \"23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7\": rpc error: code = NotFound desc = could not find container \"23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7\": container with ID starting with 23c7ef7b1b2706aef7bdbb381e767121c0d0491803af9c2ee6582680d479ade7 not found: ID does not exist" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.505340 4900 scope.go:117] "RemoveContainer" containerID="28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.505795 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138"} err="failed to get container status \"28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138\": rpc error: code = NotFound desc = could not find container \"28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138\": container with ID starting with 28473832b22717dc41c59a3064c3a0f3a2d6ebd3a4fb1abec13c41b8dc7e0138 not found: ID does not exist" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.506445 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.600610 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-ceph\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.600688 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.600717 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-logs\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.600735 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.600786 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.600963 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-scripts\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.601241 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-892xs\" (UniqueName: \"kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-kube-api-access-892xs\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.702719 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-892xs\" (UniqueName: \"kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-kube-api-access-892xs\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.702795 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-ceph\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.702839 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.702867 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-logs\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.702888 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.702924 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-config-data\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.702959 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-scripts\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " 
pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.703542 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-logs\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.703813 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.706718 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.709184 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-ceph\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.709995 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-scripts\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.710787 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-config-data\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.728133 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-892xs\" (UniqueName: \"kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-kube-api-access-892xs\") pod \"glance-default-external-api-0\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.810614 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 15:14:24 crc kubenswrapper[4900]: I1202 15:14:24.921546 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de" path="/var/lib/kubelet/pods/81cc8ab1-a91c-4d5a-b9ba-a4b6c3eda0de/volumes" Dec 02 15:14:25 crc kubenswrapper[4900]: I1202 15:14:25.326958 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:14:25 crc kubenswrapper[4900]: I1202 15:14:25.441088 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6199fc67-306a-4267-9303-4673b0145e06","Type":"ContainerStarted","Data":"cb39de123371bf578dab6e787cb88de8c5ee30d8ffd5f46b3f3a15d7ca85a356"} Dec 02 15:14:25 crc kubenswrapper[4900]: I1202 15:14:25.443799 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fbcc99e7-b7bd-4912-b198-92fbf94c16c1" containerName="glance-log" containerID="cri-o://11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52" gracePeriod=30 Dec 02 15:14:25 crc kubenswrapper[4900]: I1202 15:14:25.444177 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fbcc99e7-b7bd-4912-b198-92fbf94c16c1" containerName="glance-httpd" containerID="cri-o://2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae" gracePeriod=30 Dec 02 15:14:25 crc kubenswrapper[4900]: I1202 15:14:25.944980 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.135003 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-ceph\") pod \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.135408 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-combined-ca-bundle\") pod \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.135485 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-scripts\") pod \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.135508 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-httpd-run\") pod \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.135990 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-config-data\") pod \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.136064 4900 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-djmd4\" (UniqueName: \"kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-kube-api-access-djmd4\") pod \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.136096 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-logs\") pod \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\" (UID: \"fbcc99e7-b7bd-4912-b198-92fbf94c16c1\") " Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.136692 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-logs" (OuterVolumeSpecName: "logs") pod "fbcc99e7-b7bd-4912-b198-92fbf94c16c1" (UID: "fbcc99e7-b7bd-4912-b198-92fbf94c16c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.137107 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fbcc99e7-b7bd-4912-b198-92fbf94c16c1" (UID: "fbcc99e7-b7bd-4912-b198-92fbf94c16c1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.138908 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-scripts" (OuterVolumeSpecName: "scripts") pod "fbcc99e7-b7bd-4912-b198-92fbf94c16c1" (UID: "fbcc99e7-b7bd-4912-b198-92fbf94c16c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.139699 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-kube-api-access-djmd4" (OuterVolumeSpecName: "kube-api-access-djmd4") pod "fbcc99e7-b7bd-4912-b198-92fbf94c16c1" (UID: "fbcc99e7-b7bd-4912-b198-92fbf94c16c1"). InnerVolumeSpecName "kube-api-access-djmd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.140313 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-ceph" (OuterVolumeSpecName: "ceph") pod "fbcc99e7-b7bd-4912-b198-92fbf94c16c1" (UID: "fbcc99e7-b7bd-4912-b198-92fbf94c16c1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.163027 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbcc99e7-b7bd-4912-b198-92fbf94c16c1" (UID: "fbcc99e7-b7bd-4912-b198-92fbf94c16c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.177859 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-config-data" (OuterVolumeSpecName: "config-data") pod "fbcc99e7-b7bd-4912-b198-92fbf94c16c1" (UID: "fbcc99e7-b7bd-4912-b198-92fbf94c16c1"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.237876 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.237903 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.237912 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.237921 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djmd4\" (UniqueName: \"kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-kube-api-access-djmd4\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.237932 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.237942 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.237951 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcc99e7-b7bd-4912-b198-92fbf94c16c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.458179 4900 generic.go:334] "Generic (PLEG): container finished" podID="fbcc99e7-b7bd-4912-b198-92fbf94c16c1" containerID="2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae" exitCode=0 Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.458219 4900 generic.go:334] "Generic (PLEG): container finished" podID="fbcc99e7-b7bd-4912-b198-92fbf94c16c1" containerID="11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52" exitCode=143 Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.458217 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fbcc99e7-b7bd-4912-b198-92fbf94c16c1","Type":"ContainerDied","Data":"2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae"} Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.458269 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fbcc99e7-b7bd-4912-b198-92fbf94c16c1","Type":"ContainerDied","Data":"11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52"} Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.458283 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fbcc99e7-b7bd-4912-b198-92fbf94c16c1","Type":"ContainerDied","Data":"c5dc4658f2daff7be3aab82c3d77b7fb791fa5ac7444699c4aea167d4c6824bc"} Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.458301 4900 scope.go:117] "RemoveContainer" containerID="2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae" Dec 02 15:14:26 
crc kubenswrapper[4900]: I1202 15:14:26.459328 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.461102 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6199fc67-306a-4267-9303-4673b0145e06","Type":"ContainerStarted","Data":"b7021b75705609d151ef2585d725f07a66a29c49f4974eb71b48073ed5a2945c"} Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.477998 4900 scope.go:117] "RemoveContainer" containerID="11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.522156 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.530041 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.563359 4900 scope.go:117] "RemoveContainer" containerID="2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.564356 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:14:26 crc kubenswrapper[4900]: E1202 15:14:26.564743 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcc99e7-b7bd-4912-b198-92fbf94c16c1" containerName="glance-log" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.564762 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbcc99e7-b7bd-4912-b198-92fbf94c16c1" containerName="glance-log" Dec 02 15:14:26 crc kubenswrapper[4900]: E1202 15:14:26.564798 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcc99e7-b7bd-4912-b198-92fbf94c16c1" containerName="glance-httpd" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.564807 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbcc99e7-b7bd-4912-b198-92fbf94c16c1" containerName="glance-httpd" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.565017 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbcc99e7-b7bd-4912-b198-92fbf94c16c1" containerName="glance-log" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.565045 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbcc99e7-b7bd-4912-b198-92fbf94c16c1" containerName="glance-httpd" Dec 02 15:14:26 crc kubenswrapper[4900]: E1202 15:14:26.566778 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae\": container with ID starting with 2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae not found: ID does not exist" containerID="2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.566820 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae"} err="failed to get container status \"2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae\": rpc error: code = NotFound desc = could not find container \"2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae\": container with ID starting with 2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae not 
found: ID does not exist" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.566846 4900 scope.go:117] "RemoveContainer" containerID="11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52" Dec 02 15:14:26 crc kubenswrapper[4900]: E1202 15:14:26.567313 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52\": container with ID starting with 11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52 not found: ID does not exist" containerID="11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.567338 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52"} err="failed to get container status \"11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52\": rpc error: code = NotFound desc = could not find container \"11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52\": container with ID starting with 11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52 not found: ID does not exist" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.567354 4900 scope.go:117] "RemoveContainer" containerID="2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.567614 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae"} err="failed to get container status \"2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae\": rpc error: code = NotFound desc = could not find container \"2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae\": container with ID starting with 2d8376a6cb560b7a5ff2f89623890fc2f46c5d21606dc60e079505195988d4ae not found: ID does not exist" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.567633 4900 scope.go:117] "RemoveContainer" containerID="11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.567783 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52"} err="failed to get container status \"11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52\": rpc error: code = NotFound desc = could not find container \"11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52\": container with ID starting with 11ce9dbf1e8376edaff6995a6d8dfc860381ce352f35be9e106be4a63c0c2a52 not found: ID does not exist" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.568486 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.570717 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.595708 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.747183 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.747634 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.747769 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.747804 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.747848 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mpv7\" (UniqueName: \"kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-kube-api-access-5mpv7\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.747893 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.747964 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.849554 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") 
" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.849677 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.849770 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.849800 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.849843 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mpv7\" (UniqueName: \"kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-kube-api-access-5mpv7\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.849882 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.849923 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.850973 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.851543 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.855056 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.855273 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.856435 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.857554 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.873872 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mpv7\" (UniqueName: \"kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-kube-api-access-5mpv7\") pod \"glance-default-internal-api-0\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.894808 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:26 crc kubenswrapper[4900]: I1202 15:14:26.921255 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbcc99e7-b7bd-4912-b198-92fbf94c16c1" path="/var/lib/kubelet/pods/fbcc99e7-b7bd-4912-b198-92fbf94c16c1/volumes" Dec 02 15:14:27 crc kubenswrapper[4900]: I1202 15:14:27.390879 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:14:27 crc kubenswrapper[4900]: I1202 15:14:27.477936 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f","Type":"ContainerStarted","Data":"e7268a0b9cf1af478276a5a2b8d1194a357cf06d9b29c741bbdaac4d6ee02242"} Dec 02 15:14:27 crc kubenswrapper[4900]: I1202 15:14:27.488488 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6199fc67-306a-4267-9303-4673b0145e06","Type":"ContainerStarted","Data":"22a90fc032e37a735a851e66367f1ce0f4604bcd4540471ce99135f8d8c59a59"} Dec 02 15:14:27 crc kubenswrapper[4900]: I1202 15:14:27.531973 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.531951442 podStartE2EDuration="3.531951442s" podCreationTimestamp="2025-12-02 15:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:14:27.51869839 +0000 UTC m=+5512.934512251" watchObservedRunningTime="2025-12-02 15:14:27.531951442 +0000 UTC m=+5512.947765293" Dec 02 15:14:28 crc kubenswrapper[4900]: I1202 15:14:28.511785 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f","Type":"ContainerStarted","Data":"083e8c28c4970f3bdbd84e5e14659b349eb7c0a6e5f0316c1ff280bfeacf9e8f"} Dec 02 
Dec 02 15:14:28 crc kubenswrapper[4900]: I1202 15:14:28.541830 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.541805743 podStartE2EDuration="2.541805743s" podCreationTimestamp="2025-12-02 15:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:14:28.538334075 +0000 UTC m=+5513.954147976" watchObservedRunningTime="2025-12-02 15:14:28.541805743 +0000 UTC m=+5513.957619604" Dec 02 15:14:29 crc kubenswrapper[4900]: I1202 15:14:29.542467 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f","Type":"ContainerStarted","Data":"182a284ab3b298b51861a8fa2aca2b7033022e8be81e6bf8097586e9df332175"} Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.086844 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.176118 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b48cdcfd9-d68w2"] Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.176465 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2" podUID="3aacdc40-241e-4194-9dff-4d60ea1de0a1" containerName="dnsmasq-dns" containerID="cri-o://b157d8ca253d5d730585208987eaf5b7eabef78ecdd0072f2d8558e9aaa43e77" gracePeriod=10 Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.277435 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2" podUID="3aacdc40-241e-4194-9dff-4d60ea1de0a1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.34:5353: connect: connection refused" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.561573 4900 generic.go:334] "Generic (PLEG): container finished" podID="3aacdc40-241e-4194-9dff-4d60ea1de0a1" containerID="b157d8ca253d5d730585208987eaf5b7eabef78ecdd0072f2d8558e9aaa43e77" exitCode=0 Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.561612 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2" event={"ID":"3aacdc40-241e-4194-9dff-4d60ea1de0a1","Type":"ContainerDied","Data":"b157d8ca253d5d730585208987eaf5b7eabef78ecdd0072f2d8558e9aaa43e77"} Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.647393 4900 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.738069 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-sb\") pod \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.738149 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ch2p\" (UniqueName: \"kubernetes.io/projected/3aacdc40-241e-4194-9dff-4d60ea1de0a1-kube-api-access-5ch2p\") pod \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.738246 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-nb\") pod \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.738296 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-config\") pod \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.738359 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-dns-svc\") pod \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\" (UID: \"3aacdc40-241e-4194-9dff-4d60ea1de0a1\") " Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.770994 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aacdc40-241e-4194-9dff-4d60ea1de0a1-kube-api-access-5ch2p" (OuterVolumeSpecName: "kube-api-access-5ch2p") pod "3aacdc40-241e-4194-9dff-4d60ea1de0a1" (UID: "3aacdc40-241e-4194-9dff-4d60ea1de0a1"). InnerVolumeSpecName "kube-api-access-5ch2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.840073 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ch2p\" (UniqueName: \"kubernetes.io/projected/3aacdc40-241e-4194-9dff-4d60ea1de0a1-kube-api-access-5ch2p\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.843643 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3aacdc40-241e-4194-9dff-4d60ea1de0a1" (UID: "3aacdc40-241e-4194-9dff-4d60ea1de0a1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.851050 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-config" (OuterVolumeSpecName: "config") pod "3aacdc40-241e-4194-9dff-4d60ea1de0a1" (UID: "3aacdc40-241e-4194-9dff-4d60ea1de0a1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.851243 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3aacdc40-241e-4194-9dff-4d60ea1de0a1" (UID: "3aacdc40-241e-4194-9dff-4d60ea1de0a1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.860159 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3aacdc40-241e-4194-9dff-4d60ea1de0a1" (UID: "3aacdc40-241e-4194-9dff-4d60ea1de0a1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.941821 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.941884 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.941901 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:31 crc kubenswrapper[4900]: I1202 15:14:31.941914 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aacdc40-241e-4194-9dff-4d60ea1de0a1-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:32 crc kubenswrapper[4900]: I1202 15:14:32.572078 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b48cdcfd9-d68w2" event={"ID":"3aacdc40-241e-4194-9dff-4d60ea1de0a1","Type":"ContainerDied","Data":"5e22557a01ccccf670b90e7b356d57dbcf26c3b27904951c22728c5252677db6"} Dec 02 15:14:32 crc kubenswrapper[4900]: I1202 15:14:32.572135 4900 scope.go:117] "RemoveContainer" containerID="b157d8ca253d5d730585208987eaf5b7eabef78ecdd0072f2d8558e9aaa43e77" Dec 02 15:14:32 crc kubenswrapper[4900]: I1202 15:14:32.572179 4900 util.go:48] "No ready sandbox for pod can be found. 
Dec 02 15:14:32 crc kubenswrapper[4900]: I1202 15:14:32.601772 4900 scope.go:117] "RemoveContainer" containerID="d17c53732ddd2d07cc03572bef18424a8c1d95f16d99cdfa9ea37badd4791f82" Dec 02 15:14:32 crc kubenswrapper[4900]: I1202 15:14:32.635490 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b48cdcfd9-d68w2"] Dec 02 15:14:32 crc kubenswrapper[4900]: I1202 15:14:32.641471 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b48cdcfd9-d68w2"] Dec 02 15:14:32 crc kubenswrapper[4900]: I1202 15:14:32.923904 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aacdc40-241e-4194-9dff-4d60ea1de0a1" path="/var/lib/kubelet/pods/3aacdc40-241e-4194-9dff-4d60ea1de0a1/volumes" Dec 02 15:14:34 crc kubenswrapper[4900]: I1202 15:14:34.811476 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 15:14:34 crc kubenswrapper[4900]: I1202 15:14:34.812800 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 15:14:34 crc kubenswrapper[4900]: I1202 15:14:34.853584 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 15:14:34 crc kubenswrapper[4900]: I1202 15:14:34.855356 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 15:14:35 crc kubenswrapper[4900]: I1202 15:14:35.604130 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 15:14:35 crc kubenswrapper[4900]: I1202 15:14:35.605094 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 15:14:36 crc kubenswrapper[4900]: I1202 15:14:36.895482 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:36 crc kubenswrapper[4900]: I1202 15:14:36.895586 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:36 crc kubenswrapper[4900]: I1202 15:14:36.932837 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:36 crc kubenswrapper[4900]: I1202 15:14:36.944438 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:37 crc kubenswrapper[4900]: I1202 15:14:37.592440 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 15:14:37 crc kubenswrapper[4900]: I1202 15:14:37.599034 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 15:14:37 crc kubenswrapper[4900]: I1202 15:14:37.631902 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:37 crc kubenswrapper[4900]: I1202 15:14:37.631959 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:39 crc kubenswrapper[4900]: I1202 15:14:39.511748 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
pod="openstack/glance-default-internal-api-0" Dec 02 15:14:39 crc kubenswrapper[4900]: I1202 15:14:39.514155 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 15:14:45 crc kubenswrapper[4900]: I1202 15:14:45.125461 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:14:45 crc kubenswrapper[4900]: I1202 15:14:45.126129 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:14:45 crc kubenswrapper[4900]: I1202 15:14:45.126203 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 15:14:45 crc kubenswrapper[4900]: I1202 15:14:45.128852 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b76a522fc29ab4b883e8d52d8ae8d1cc61b9e17f09e1711cca595a73a978fea"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:14:45 crc kubenswrapper[4900]: I1202 15:14:45.128938 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://3b76a522fc29ab4b883e8d52d8ae8d1cc61b9e17f09e1711cca595a73a978fea" gracePeriod=600 Dec 02 15:14:45 crc kubenswrapper[4900]: I1202 15:14:45.725825 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="3b76a522fc29ab4b883e8d52d8ae8d1cc61b9e17f09e1711cca595a73a978fea" exitCode=0 Dec 02 15:14:45 crc kubenswrapper[4900]: I1202 15:14:45.725873 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"3b76a522fc29ab4b883e8d52d8ae8d1cc61b9e17f09e1711cca595a73a978fea"} Dec 02 15:14:45 crc kubenswrapper[4900]: I1202 15:14:45.726226 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"} Dec 02 15:14:45 crc kubenswrapper[4900]: I1202 15:14:45.726259 4900 scope.go:117] "RemoveContainer" containerID="725aced945935e04e66dc3aa75f806bb54e61f188f76b13f79be654e50e923c7" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.302614 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-c95jb"] Dec 02 15:14:47 crc kubenswrapper[4900]: E1202 15:14:47.303571 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aacdc40-241e-4194-9dff-4d60ea1de0a1" containerName="init" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.303596 4900 
I1202 15:14:47.303596 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aacdc40-241e-4194-9dff-4d60ea1de0a1" containerName="init" Dec 02 15:14:47 crc kubenswrapper[4900]: E1202 15:14:47.303625 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aacdc40-241e-4194-9dff-4d60ea1de0a1" containerName="dnsmasq-dns" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.303638 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aacdc40-241e-4194-9dff-4d60ea1de0a1" containerName="dnsmasq-dns" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.304035 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aacdc40-241e-4194-9dff-4d60ea1de0a1" containerName="dnsmasq-dns" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.305091 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c95jb" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.320920 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c95jb"] Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.400851 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85a6-account-create-update-h527x"] Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.402268 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85a6-account-create-update-h527x" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.404744 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.422222 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85a6-account-create-update-h527x"] Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.449545 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-operator-scripts\") pod \"placement-db-create-c95jb\" (UID: \"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42\") " pod="openstack/placement-db-create-c95jb" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.449721 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvkw\" (UniqueName: \"kubernetes.io/projected/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-kube-api-access-ltvkw\") pod \"placement-db-create-c95jb\" (UID: \"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42\") " pod="openstack/placement-db-create-c95jb" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.551070 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltvkw\" (UniqueName: \"kubernetes.io/projected/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-kube-api-access-ltvkw\") pod \"placement-db-create-c95jb\" (UID: \"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42\") " pod="openstack/placement-db-create-c95jb" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.551148 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrdbc\" (UniqueName: \"kubernetes.io/projected/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-kube-api-access-hrdbc\") pod \"placement-85a6-account-create-update-h527x\" (UID: \"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12\") " pod="openstack/placement-85a6-account-create-update-h527x" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.551213 4900 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-operator-scripts\") pod \"placement-85a6-account-create-update-h527x\" (UID: \"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12\") " pod="openstack/placement-85a6-account-create-update-h527x" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.551236 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-operator-scripts\") pod \"placement-db-create-c95jb\" (UID: \"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42\") " pod="openstack/placement-db-create-c95jb" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.552103 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-operator-scripts\") pod \"placement-db-create-c95jb\" (UID: \"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42\") " pod="openstack/placement-db-create-c95jb" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.572801 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltvkw\" (UniqueName: \"kubernetes.io/projected/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-kube-api-access-ltvkw\") pod \"placement-db-create-c95jb\" (UID: \"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42\") " pod="openstack/placement-db-create-c95jb" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.624151 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c95jb" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.653741 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrdbc\" (UniqueName: \"kubernetes.io/projected/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-kube-api-access-hrdbc\") pod \"placement-85a6-account-create-update-h527x\" (UID: \"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12\") " pod="openstack/placement-85a6-account-create-update-h527x" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.653865 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-operator-scripts\") pod \"placement-85a6-account-create-update-h527x\" (UID: \"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12\") " pod="openstack/placement-85a6-account-create-update-h527x" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.655213 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-operator-scripts\") pod \"placement-85a6-account-create-update-h527x\" (UID: \"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12\") " pod="openstack/placement-85a6-account-create-update-h527x" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.673810 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrdbc\" (UniqueName: \"kubernetes.io/projected/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-kube-api-access-hrdbc\") pod \"placement-85a6-account-create-update-h527x\" (UID: \"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12\") " pod="openstack/placement-85a6-account-create-update-h527x" Dec 02 15:14:47 crc kubenswrapper[4900]: I1202 15:14:47.721551 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85a6-account-create-update-h527x" Dec 02 15:14:48 crc kubenswrapper[4900]: I1202 15:14:48.078638 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c95jb"] Dec 02 15:14:48 crc kubenswrapper[4900]: W1202 15:14:48.083744 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65818e2d_d982_4fe9_b4c7_8a4e77f7ee42.slice/crio-01cbdfb4a7a208e4e1e55c0747fb14e93151f6d77bb3af9c78735fb84854f29e WatchSource:0}: Error finding container 01cbdfb4a7a208e4e1e55c0747fb14e93151f6d77bb3af9c78735fb84854f29e: Status 404 returned error can't find the container with id 01cbdfb4a7a208e4e1e55c0747fb14e93151f6d77bb3af9c78735fb84854f29e Dec 02 15:14:48 crc kubenswrapper[4900]: I1202 15:14:48.191487 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85a6-account-create-update-h527x"] Dec 02 15:14:48 crc kubenswrapper[4900]: W1202 15:14:48.195471 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod737c1bfd_4cb5_41a8_aa81_fce3a6f84f12.slice/crio-3b157d7d28c347c34ec6f684a8923a940f471d3996da4c1066cd13ea3cd94909 WatchSource:0}: Error finding container 3b157d7d28c347c34ec6f684a8923a940f471d3996da4c1066cd13ea3cd94909: Status 404 returned error can't find the container with id 3b157d7d28c347c34ec6f684a8923a940f471d3996da4c1066cd13ea3cd94909 Dec 02 15:14:48 crc kubenswrapper[4900]: I1202 15:14:48.759234 4900 generic.go:334] "Generic (PLEG): container finished" podID="737c1bfd-4cb5-41a8-aa81-fce3a6f84f12" containerID="9e1bec05f01bc4938aee8f24c9b2dd2a4204fb250992230c6aefd1ee7c25b194" exitCode=0 Dec 02 15:14:48 crc kubenswrapper[4900]: I1202 15:14:48.759276 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85a6-account-create-update-h527x" event={"ID":"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12","Type":"ContainerDied","Data":"9e1bec05f01bc4938aee8f24c9b2dd2a4204fb250992230c6aefd1ee7c25b194"} Dec 02 15:14:48 crc kubenswrapper[4900]: I1202 15:14:48.759689 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85a6-account-create-update-h527x" event={"ID":"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12","Type":"ContainerStarted","Data":"3b157d7d28c347c34ec6f684a8923a940f471d3996da4c1066cd13ea3cd94909"} Dec 02 15:14:48 crc kubenswrapper[4900]: I1202 15:14:48.762009 4900 generic.go:334] "Generic (PLEG): container finished" podID="65818e2d-d982-4fe9-b4c7-8a4e77f7ee42" containerID="a73fa48de9dcb5e9d9560fa3680ecc43ef00e9b40f21297638e08d06c32d6511" exitCode=0 Dec 02 15:14:48 crc kubenswrapper[4900]: I1202 15:14:48.762047 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c95jb" event={"ID":"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42","Type":"ContainerDied","Data":"a73fa48de9dcb5e9d9560fa3680ecc43ef00e9b40f21297638e08d06c32d6511"} Dec 02 15:14:48 crc kubenswrapper[4900]: I1202 15:14:48.762095 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c95jb" event={"ID":"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42","Type":"ContainerStarted","Data":"01cbdfb4a7a208e4e1e55c0747fb14e93151f6d77bb3af9c78735fb84854f29e"} Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.230719 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85a6-account-create-update-h527x"
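Both job containers above run to completion with exitCode=0 before their pods are torn down. A sketch that pulls these "Generic (PLEG): container finished" entries and their exit codes out of a journal dump; field names are matched exactly as they appear in the lines above, and reading from stdin is an assumption:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches: container finished" podID="..." containerID="..." exitCode=N
var finRe = regexp.MustCompile(`container finished" podID="([^"]+)" containerID="([^"]+)" exitCode=(-?\d+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := finRe.FindStringSubmatch(sc.Text()); m != nil {
			// e.g. "pod 737c1bfd-4cb5-41a8-aa81-fce3a6f84f12 container 9e1bec05f01b exited 0"
			fmt.Printf("pod %s container %.12s exited %s\n", m[1], m[2], m[3])
		}
	}
}
```

A nonzero exit code in this report is the quickest way to separate failed one-shot jobs from the clean completions seen here.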
Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.238526 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c95jb" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.305497 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-operator-scripts\") pod \"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42\" (UID: \"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42\") " Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.305542 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrdbc\" (UniqueName: \"kubernetes.io/projected/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-kube-api-access-hrdbc\") pod \"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12\" (UID: \"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12\") " Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.305697 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-operator-scripts\") pod \"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12\" (UID: \"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12\") " Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.305746 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltvkw\" (UniqueName: \"kubernetes.io/projected/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-kube-api-access-ltvkw\") pod \"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42\" (UID: \"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42\") " Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.306737 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "737c1bfd-4cb5-41a8-aa81-fce3a6f84f12" (UID: "737c1bfd-4cb5-41a8-aa81-fce3a6f84f12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.308178 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65818e2d-d982-4fe9-b4c7-8a4e77f7ee42" (UID: "65818e2d-d982-4fe9-b4c7-8a4e77f7ee42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.314206 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-kube-api-access-ltvkw" (OuterVolumeSpecName: "kube-api-access-ltvkw") pod "65818e2d-d982-4fe9-b4c7-8a4e77f7ee42" (UID: "65818e2d-d982-4fe9-b4c7-8a4e77f7ee42"). InnerVolumeSpecName "kube-api-access-ltvkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.314704 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-kube-api-access-hrdbc" (OuterVolumeSpecName: "kube-api-access-hrdbc") pod "737c1bfd-4cb5-41a8-aa81-fce3a6f84f12" (UID: "737c1bfd-4cb5-41a8-aa81-fce3a6f84f12"). InnerVolumeSpecName "kube-api-access-hrdbc".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.407399 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.407450 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltvkw\" (UniqueName: \"kubernetes.io/projected/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-kube-api-access-ltvkw\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.407467 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.407478 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrdbc\" (UniqueName: \"kubernetes.io/projected/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12-kube-api-access-hrdbc\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.787524 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85a6-account-create-update-h527x" event={"ID":"737c1bfd-4cb5-41a8-aa81-fce3a6f84f12","Type":"ContainerDied","Data":"3b157d7d28c347c34ec6f684a8923a940f471d3996da4c1066cd13ea3cd94909"} Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.787581 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b157d7d28c347c34ec6f684a8923a940f471d3996da4c1066cd13ea3cd94909" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.787699 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85a6-account-create-update-h527x" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.790475 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c95jb" event={"ID":"65818e2d-d982-4fe9-b4c7-8a4e77f7ee42","Type":"ContainerDied","Data":"01cbdfb4a7a208e4e1e55c0747fb14e93151f6d77bb3af9c78735fb84854f29e"} Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.790523 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-c95jb" Dec 02 15:14:50 crc kubenswrapper[4900]: I1202 15:14:50.790536 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01cbdfb4a7a208e4e1e55c0747fb14e93151f6d77bb3af9c78735fb84854f29e" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.659150 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb88cb577-4k7jc"] Dec 02 15:14:52 crc kubenswrapper[4900]: E1202 15:14:52.659858 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737c1bfd-4cb5-41a8-aa81-fce3a6f84f12" containerName="mariadb-account-create-update" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.659898 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="737c1bfd-4cb5-41a8-aa81-fce3a6f84f12" containerName="mariadb-account-create-update" Dec 02 15:14:52 crc kubenswrapper[4900]: E1202 15:14:52.659941 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65818e2d-d982-4fe9-b4c7-8a4e77f7ee42" containerName="mariadb-database-create" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.659949 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="65818e2d-d982-4fe9-b4c7-8a4e77f7ee42" containerName="mariadb-database-create" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.660165 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="737c1bfd-4cb5-41a8-aa81-fce3a6f84f12" containerName="mariadb-account-create-update" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.660181 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="65818e2d-d982-4fe9-b4c7-8a4e77f7ee42" containerName="mariadb-database-create" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.661675 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.667700 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-6h5mj"] Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.669073 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6h5mj"
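The event={...} payload in every "SyncLoop (PLEG): event for pod" line in this stream is ordinary JSON, so the pod UID, event type, and container (or sandbox) ID can be cut out and unmarshalled directly. A minimal sketch; the struct fields follow the keys visible in these lines, and reading journal output on stdin is an assumption:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type plegEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // ContainerStarted, ContainerDied, ...
	Data string `json:"Data"` // container or sandbox ID
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		i := strings.Index(line, "event={")
		if i < 0 {
			continue
		}
		raw := line[i+len("event="):]
		if j := strings.IndexByte(raw, '}'); j >= 0 {
			raw = raw[:j+1] // the payload has no nested braces
		}
		var ev plegEvent
		if err := json.Unmarshal([]byte(raw), &ev); err == nil {
			fmt.Printf("%s %s %.12s\n", ev.Type, ev.ID, ev.Data)
		}
	}
}
```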
Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.670829 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.671213 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9jqcx" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.672491 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.678457 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb88cb577-4k7jc"] Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.690678 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6h5mj"] Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.757428 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-dns-svc\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.757488 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-config\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.757544 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-config-data\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.757584 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-combined-ca-bundle\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.757709 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7754303c-9e8d-4345-bc03-92010f30c25b-logs\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.757745 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.757779 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-scripts\") pod \"placement-db-sync-6h5mj\"
(UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.757814 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.757865 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkgm6\" (UniqueName: \"kubernetes.io/projected/b898aeaf-d484-479f-bc33-3d692adddfeb-kube-api-access-pkgm6\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.757931 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z42sq\" (UniqueName: \"kubernetes.io/projected/7754303c-9e8d-4345-bc03-92010f30c25b-kube-api-access-z42sq\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.859037 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z42sq\" (UniqueName: \"kubernetes.io/projected/7754303c-9e8d-4345-bc03-92010f30c25b-kube-api-access-z42sq\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.859111 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-dns-svc\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.859131 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-config\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.859164 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-config-data\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.859186 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-combined-ca-bundle\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.859211 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7754303c-9e8d-4345-bc03-92010f30c25b-logs\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " 
pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.859231 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.859250 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-scripts\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.859272 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.859306 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkgm6\" (UniqueName: \"kubernetes.io/projected/b898aeaf-d484-479f-bc33-3d692adddfeb-kube-api-access-pkgm6\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.860006 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7754303c-9e8d-4345-bc03-92010f30c25b-logs\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.860171 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-config\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.860227 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.860740 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-dns-svc\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.861134 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.864444 4900 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-config-data\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.866159 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-combined-ca-bundle\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.872062 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-scripts\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.883587 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z42sq\" (UniqueName: \"kubernetes.io/projected/7754303c-9e8d-4345-bc03-92010f30c25b-kube-api-access-z42sq\") pod \"placement-db-sync-6h5mj\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.889015 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkgm6\" (UniqueName: \"kubernetes.io/projected/b898aeaf-d484-479f-bc33-3d692adddfeb-kube-api-access-pkgm6\") pod \"dnsmasq-dns-6cb88cb577-4k7jc\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.988424 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:52 crc kubenswrapper[4900]: I1202 15:14:52.996531 4900 util.go:30] "No sandbox for pod can be found. 
Dec 02 15:14:53 crc kubenswrapper[4900]: I1202 15:14:53.340713 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6h5mj"] Dec 02 15:14:53 crc kubenswrapper[4900]: I1202 15:14:53.445168 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb88cb577-4k7jc"] Dec 02 15:14:53 crc kubenswrapper[4900]: I1202 15:14:53.815370 4900 generic.go:334] "Generic (PLEG): container finished" podID="b898aeaf-d484-479f-bc33-3d692adddfeb" containerID="2af0e5dbefe67794b5dca649b1753ebb9dd2e43972e13e0d46193f0090b66bd7" exitCode=0 Dec 02 15:14:53 crc kubenswrapper[4900]: I1202 15:14:53.815783 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" event={"ID":"b898aeaf-d484-479f-bc33-3d692adddfeb","Type":"ContainerDied","Data":"2af0e5dbefe67794b5dca649b1753ebb9dd2e43972e13e0d46193f0090b66bd7"} Dec 02 15:14:53 crc kubenswrapper[4900]: I1202 15:14:53.815835 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" event={"ID":"b898aeaf-d484-479f-bc33-3d692adddfeb","Type":"ContainerStarted","Data":"306f1f558eb3eec2e85710c562b58ee5d102aa8ee043f0785aff461f72c72710"} Dec 02 15:14:53 crc kubenswrapper[4900]: I1202 15:14:53.817938 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6h5mj" event={"ID":"7754303c-9e8d-4345-bc03-92010f30c25b","Type":"ContainerStarted","Data":"45bef62447ad90fa0db77a7ca66581344708e0aaac6c07b9901e935c50ec3c67"} Dec 02 15:14:53 crc kubenswrapper[4900]: I1202 15:14:53.817969 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6h5mj" event={"ID":"7754303c-9e8d-4345-bc03-92010f30c25b","Type":"ContainerStarted","Data":"6e1c7953c59eebeeb4194566382edb093d8bf4cbb71199b45d02a7bf9b7bca4a"} Dec 02 15:14:53 crc kubenswrapper[4900]: I1202 15:14:53.854836 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-6h5mj" podStartSLOduration=1.854812554 podStartE2EDuration="1.854812554s" podCreationTimestamp="2025-12-02 15:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:14:53.851614385 +0000 UTC m=+5539.267428246" watchObservedRunningTime="2025-12-02 15:14:53.854812554 +0000 UTC m=+5539.270626415" Dec 02 15:14:54 crc kubenswrapper[4900]: I1202 15:14:54.833612 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" event={"ID":"b898aeaf-d484-479f-bc33-3d692adddfeb","Type":"ContainerStarted","Data":"5413cd290408b0f577d82ecf1fd06b4d03efbac3b009f50226dae79039809983"} Dec 02 15:14:54 crc kubenswrapper[4900]: I1202 15:14:54.835309 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:14:54 crc kubenswrapper[4900]: I1202 15:14:54.838931 4900 generic.go:334] "Generic (PLEG): container finished" podID="7754303c-9e8d-4345-bc03-92010f30c25b" containerID="45bef62447ad90fa0db77a7ca66581344708e0aaac6c07b9901e935c50ec3c67" exitCode=0 Dec 02 15:14:54 crc kubenswrapper[4900]: I1202 15:14:54.838970 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6h5mj" event={"ID":"7754303c-9e8d-4345-bc03-92010f30c25b","Type":"ContainerDied","Data":"45bef62447ad90fa0db77a7ca66581344708e0aaac6c07b9901e935c50ec3c67"} Dec 02 15:14:54 crc kubenswrapper[4900]:
I1202 15:14:54.872669 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" podStartSLOduration=2.872625697 podStartE2EDuration="2.872625697s" podCreationTimestamp="2025-12-02 15:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:14:54.864913861 +0000 UTC m=+5540.280727762" watchObservedRunningTime="2025-12-02 15:14:54.872625697 +0000 UTC m=+5540.288439558" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.236558 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6h5mj" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.359500 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-scripts\") pod \"7754303c-9e8d-4345-bc03-92010f30c25b\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.359768 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-combined-ca-bundle\") pod \"7754303c-9e8d-4345-bc03-92010f30c25b\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.359902 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7754303c-9e8d-4345-bc03-92010f30c25b-logs\") pod \"7754303c-9e8d-4345-bc03-92010f30c25b\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.360011 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z42sq\" (UniqueName: \"kubernetes.io/projected/7754303c-9e8d-4345-bc03-92010f30c25b-kube-api-access-z42sq\") pod \"7754303c-9e8d-4345-bc03-92010f30c25b\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.360095 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-config-data\") pod \"7754303c-9e8d-4345-bc03-92010f30c25b\" (UID: \"7754303c-9e8d-4345-bc03-92010f30c25b\") " Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.360576 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7754303c-9e8d-4345-bc03-92010f30c25b-logs" (OuterVolumeSpecName: "logs") pod "7754303c-9e8d-4345-bc03-92010f30c25b" (UID: "7754303c-9e8d-4345-bc03-92010f30c25b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.360866 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7754303c-9e8d-4345-bc03-92010f30c25b-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.368264 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7754303c-9e8d-4345-bc03-92010f30c25b-kube-api-access-z42sq" (OuterVolumeSpecName: "kube-api-access-z42sq") pod "7754303c-9e8d-4345-bc03-92010f30c25b" (UID: "7754303c-9e8d-4345-bc03-92010f30c25b"). InnerVolumeSpecName "kube-api-access-z42sq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.372834 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-scripts" (OuterVolumeSpecName: "scripts") pod "7754303c-9e8d-4345-bc03-92010f30c25b" (UID: "7754303c-9e8d-4345-bc03-92010f30c25b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.405345 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-config-data" (OuterVolumeSpecName: "config-data") pod "7754303c-9e8d-4345-bc03-92010f30c25b" (UID: "7754303c-9e8d-4345-bc03-92010f30c25b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.412287 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7754303c-9e8d-4345-bc03-92010f30c25b" (UID: "7754303c-9e8d-4345-bc03-92010f30c25b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.462059 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z42sq\" (UniqueName: \"kubernetes.io/projected/7754303c-9e8d-4345-bc03-92010f30c25b-kube-api-access-z42sq\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.462099 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.462112 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.462125 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7754303c-9e8d-4345-bc03-92010f30c25b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.868413 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6h5mj" event={"ID":"7754303c-9e8d-4345-bc03-92010f30c25b","Type":"ContainerDied","Data":"6e1c7953c59eebeeb4194566382edb093d8bf4cbb71199b45d02a7bf9b7bca4a"} Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.868545 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e1c7953c59eebeeb4194566382edb093d8bf4cbb71199b45d02a7bf9b7bca4a" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.868736 4900 util.go:48] "No ready sandbox for pod can be found. 
Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.995023 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6cdf87577b-9k5x9"] Dec 02 15:14:56 crc kubenswrapper[4900]: E1202 15:14:56.998873 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7754303c-9e8d-4345-bc03-92010f30c25b" containerName="placement-db-sync" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.998907 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7754303c-9e8d-4345-bc03-92010f30c25b" containerName="placement-db-sync" Dec 02 15:14:56 crc kubenswrapper[4900]: I1202 15:14:56.999206 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7754303c-9e8d-4345-bc03-92010f30c25b" containerName="placement-db-sync" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.000362 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.014097 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9jqcx" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.014198 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.014313 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.020618 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cdf87577b-9k5x9"] Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.188118 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn97p\" (UniqueName: \"kubernetes.io/projected/425acb33-bee4-4ad1-8c19-301cda4281de-kube-api-access-bn97p\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.188525 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425acb33-bee4-4ad1-8c19-301cda4281de-combined-ca-bundle\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.188547 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425acb33-bee4-4ad1-8c19-301cda4281de-scripts\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.188596 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425acb33-bee4-4ad1-8c19-301cda4281de-config-data\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.188620 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/425acb33-bee4-4ad1-8c19-301cda4281de-logs\") pod
\"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.290703 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn97p\" (UniqueName: \"kubernetes.io/projected/425acb33-bee4-4ad1-8c19-301cda4281de-kube-api-access-bn97p\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.290967 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425acb33-bee4-4ad1-8c19-301cda4281de-combined-ca-bundle\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.291029 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425acb33-bee4-4ad1-8c19-301cda4281de-scripts\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.291107 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425acb33-bee4-4ad1-8c19-301cda4281de-config-data\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.291187 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/425acb33-bee4-4ad1-8c19-301cda4281de-logs\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.292080 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/425acb33-bee4-4ad1-8c19-301cda4281de-logs\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.298664 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425acb33-bee4-4ad1-8c19-301cda4281de-combined-ca-bundle\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.298966 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425acb33-bee4-4ad1-8c19-301cda4281de-scripts\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.301351 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425acb33-bee4-4ad1-8c19-301cda4281de-config-data\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.317420 
4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn97p\" (UniqueName: \"kubernetes.io/projected/425acb33-bee4-4ad1-8c19-301cda4281de-kube-api-access-bn97p\") pod \"placement-6cdf87577b-9k5x9\" (UID: \"425acb33-bee4-4ad1-8c19-301cda4281de\") " pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.342349 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.637252 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cdf87577b-9k5x9"] Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.880356 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cdf87577b-9k5x9" event={"ID":"425acb33-bee4-4ad1-8c19-301cda4281de","Type":"ContainerStarted","Data":"1fa67f3095ea816feca38d3684b10fcd85b9bc5da823e374e41ae56574ac47b1"} Dec 02 15:14:57 crc kubenswrapper[4900]: I1202 15:14:57.880700 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cdf87577b-9k5x9" event={"ID":"425acb33-bee4-4ad1-8c19-301cda4281de","Type":"ContainerStarted","Data":"ab4f3ee2d40fee798b3203f792639478380478f559df0d0bcc9baeca59c68a90"} Dec 02 15:14:58 crc kubenswrapper[4900]: I1202 15:14:58.892820 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cdf87577b-9k5x9" event={"ID":"425acb33-bee4-4ad1-8c19-301cda4281de","Type":"ContainerStarted","Data":"79dce195ef46839a1d063b472d0eb7289bfb72ca0961370658f93c7820291a10"} Dec 02 15:14:58 crc kubenswrapper[4900]: I1202 15:14:58.893246 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:14:59 crc kubenswrapper[4900]: I1202 15:14:59.905892 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.139933 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6cdf87577b-9k5x9" podStartSLOduration=4.139913248 podStartE2EDuration="4.139913248s" podCreationTimestamp="2025-12-02 15:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:14:58.927068131 +0000 UTC m=+5544.342881992" watchObservedRunningTime="2025-12-02 15:15:00.139913248 +0000 UTC m=+5545.555727109" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.154187 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb"] Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.203398 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.207910 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.207993 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.233267 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb"] Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.245408 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4xc\" (UniqueName: \"kubernetes.io/projected/25ce3395-7fb2-44c6-a046-01596df95ec7-kube-api-access-rm4xc\") pod \"collect-profiles-29411475-927wb\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.245487 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25ce3395-7fb2-44c6-a046-01596df95ec7-config-volume\") pod \"collect-profiles-29411475-927wb\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.245536 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25ce3395-7fb2-44c6-a046-01596df95ec7-secret-volume\") pod \"collect-profiles-29411475-927wb\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.348313 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm4xc\" (UniqueName: \"kubernetes.io/projected/25ce3395-7fb2-44c6-a046-01596df95ec7-kube-api-access-rm4xc\") pod \"collect-profiles-29411475-927wb\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.348876 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25ce3395-7fb2-44c6-a046-01596df95ec7-config-volume\") pod \"collect-profiles-29411475-927wb\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.349229 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25ce3395-7fb2-44c6-a046-01596df95ec7-secret-volume\") pod \"collect-profiles-29411475-927wb\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.354251 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25ce3395-7fb2-44c6-a046-01596df95ec7-config-volume\") pod 
\"collect-profiles-29411475-927wb\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.357995 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25ce3395-7fb2-44c6-a046-01596df95ec7-secret-volume\") pod \"collect-profiles-29411475-927wb\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.381399 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm4xc\" (UniqueName: \"kubernetes.io/projected/25ce3395-7fb2-44c6-a046-01596df95ec7-kube-api-access-rm4xc\") pod \"collect-profiles-29411475-927wb\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:00 crc kubenswrapper[4900]: I1202 15:15:00.543766 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:01 crc kubenswrapper[4900]: I1202 15:15:01.006869 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb"] Dec 02 15:15:01 crc kubenswrapper[4900]: I1202 15:15:01.932328 4900 generic.go:334] "Generic (PLEG): container finished" podID="25ce3395-7fb2-44c6-a046-01596df95ec7" containerID="9edb6d4ef15b81802da9cb4bc1599eaaf40d8980b0786ae62d36f74515be677b" exitCode=0 Dec 02 15:15:01 crc kubenswrapper[4900]: I1202 15:15:01.932377 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" event={"ID":"25ce3395-7fb2-44c6-a046-01596df95ec7","Type":"ContainerDied","Data":"9edb6d4ef15b81802da9cb4bc1599eaaf40d8980b0786ae62d36f74515be677b"} Dec 02 15:15:01 crc kubenswrapper[4900]: I1202 15:15:01.932431 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" event={"ID":"25ce3395-7fb2-44c6-a046-01596df95ec7","Type":"ContainerStarted","Data":"6acd2b0377fd11622f396b69465dc423fab29360a731cb34dbde90a04083d7b5"} Dec 02 15:15:02 crc kubenswrapper[4900]: I1202 15:15:02.991116 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.083621 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64484ff4c7-krmgz"] Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.084150 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" podUID="9025b472-e2f6-4adb-a063-78c26a15a1ca" containerName="dnsmasq-dns" containerID="cri-o://30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5" gracePeriod=10 Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.371052 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.410108 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25ce3395-7fb2-44c6-a046-01596df95ec7-config-volume\") pod \"25ce3395-7fb2-44c6-a046-01596df95ec7\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.410227 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25ce3395-7fb2-44c6-a046-01596df95ec7-secret-volume\") pod \"25ce3395-7fb2-44c6-a046-01596df95ec7\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.410288 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm4xc\" (UniqueName: \"kubernetes.io/projected/25ce3395-7fb2-44c6-a046-01596df95ec7-kube-api-access-rm4xc\") pod \"25ce3395-7fb2-44c6-a046-01596df95ec7\" (UID: \"25ce3395-7fb2-44c6-a046-01596df95ec7\") " Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.411355 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ce3395-7fb2-44c6-a046-01596df95ec7-config-volume" (OuterVolumeSpecName: "config-volume") pod "25ce3395-7fb2-44c6-a046-01596df95ec7" (UID: "25ce3395-7fb2-44c6-a046-01596df95ec7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.434847 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ce3395-7fb2-44c6-a046-01596df95ec7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "25ce3395-7fb2-44c6-a046-01596df95ec7" (UID: "25ce3395-7fb2-44c6-a046-01596df95ec7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.436236 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ce3395-7fb2-44c6-a046-01596df95ec7-kube-api-access-rm4xc" (OuterVolumeSpecName: "kube-api-access-rm4xc") pod "25ce3395-7fb2-44c6-a046-01596df95ec7" (UID: "25ce3395-7fb2-44c6-a046-01596df95ec7"). InnerVolumeSpecName "kube-api-access-rm4xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.511808 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25ce3395-7fb2-44c6-a046-01596df95ec7-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.511847 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25ce3395-7fb2-44c6-a046-01596df95ec7-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.511857 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm4xc\" (UniqueName: \"kubernetes.io/projected/25ce3395-7fb2-44c6-a046-01596df95ec7-kube-api-access-rm4xc\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.523003 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.613441 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmwdc\" (UniqueName: \"kubernetes.io/projected/9025b472-e2f6-4adb-a063-78c26a15a1ca-kube-api-access-bmwdc\") pod \"9025b472-e2f6-4adb-a063-78c26a15a1ca\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.613891 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-nb\") pod \"9025b472-e2f6-4adb-a063-78c26a15a1ca\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.613971 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-dns-svc\") pod \"9025b472-e2f6-4adb-a063-78c26a15a1ca\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.613999 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-config\") pod \"9025b472-e2f6-4adb-a063-78c26a15a1ca\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.614051 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-sb\") pod \"9025b472-e2f6-4adb-a063-78c26a15a1ca\" (UID: \"9025b472-e2f6-4adb-a063-78c26a15a1ca\") " Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.617320 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9025b472-e2f6-4adb-a063-78c26a15a1ca-kube-api-access-bmwdc" (OuterVolumeSpecName: "kube-api-access-bmwdc") pod "9025b472-e2f6-4adb-a063-78c26a15a1ca" (UID: "9025b472-e2f6-4adb-a063-78c26a15a1ca"). InnerVolumeSpecName "kube-api-access-bmwdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.653528 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9025b472-e2f6-4adb-a063-78c26a15a1ca" (UID: "9025b472-e2f6-4adb-a063-78c26a15a1ca"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.661219 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9025b472-e2f6-4adb-a063-78c26a15a1ca" (UID: "9025b472-e2f6-4adb-a063-78c26a15a1ca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.667127 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9025b472-e2f6-4adb-a063-78c26a15a1ca" (UID: "9025b472-e2f6-4adb-a063-78c26a15a1ca"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.667276 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-config" (OuterVolumeSpecName: "config") pod "9025b472-e2f6-4adb-a063-78c26a15a1ca" (UID: "9025b472-e2f6-4adb-a063-78c26a15a1ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.716434 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.716463 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.716473 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.716482 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9025b472-e2f6-4adb-a063-78c26a15a1ca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.716493 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmwdc\" (UniqueName: \"kubernetes.io/projected/9025b472-e2f6-4adb-a063-78c26a15a1ca-kube-api-access-bmwdc\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.947437 4900 generic.go:334] "Generic (PLEG): container finished" podID="9025b472-e2f6-4adb-a063-78c26a15a1ca" containerID="30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5" exitCode=0 Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.947510 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" event={"ID":"9025b472-e2f6-4adb-a063-78c26a15a1ca","Type":"ContainerDied","Data":"30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5"} Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.947533 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.947559 4900 scope.go:117] "RemoveContainer" containerID="30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.947546 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64484ff4c7-krmgz" event={"ID":"9025b472-e2f6-4adb-a063-78c26a15a1ca","Type":"ContainerDied","Data":"91bb68dffc9e287a26b4dc4fe360d90c58181114c3716139260affbff54326f7"} Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.949467 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" event={"ID":"25ce3395-7fb2-44c6-a046-01596df95ec7","Type":"ContainerDied","Data":"6acd2b0377fd11622f396b69465dc423fab29360a731cb34dbde90a04083d7b5"} Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.949504 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6acd2b0377fd11622f396b69465dc423fab29360a731cb34dbde90a04083d7b5" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.949512 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.965041 4900 scope.go:117] "RemoveContainer" containerID="b61ae6544a56c04ba636242e30c587077676d7f96496d36675e778e55ef17253" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.986806 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64484ff4c7-krmgz"] Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.987879 4900 scope.go:117] "RemoveContainer" containerID="30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5" Dec 02 15:15:03 crc kubenswrapper[4900]: E1202 15:15:03.988327 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5\": container with ID starting with 30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5 not found: ID does not exist" containerID="30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.988357 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5"} err="failed to get container status \"30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5\": rpc error: code = NotFound desc = could not find container \"30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5\": container with ID starting with 30b6b066db9e4bc90bd55ef8613d409e731b25870d0bb3dc077e26e07d2042e5 not found: ID does not exist" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.988380 4900 scope.go:117] "RemoveContainer" containerID="b61ae6544a56c04ba636242e30c587077676d7f96496d36675e778e55ef17253" Dec 02 15:15:03 crc kubenswrapper[4900]: E1202 15:15:03.988582 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b61ae6544a56c04ba636242e30c587077676d7f96496d36675e778e55ef17253\": container with ID starting with b61ae6544a56c04ba636242e30c587077676d7f96496d36675e778e55ef17253 not found: ID does not exist" 
containerID="b61ae6544a56c04ba636242e30c587077676d7f96496d36675e778e55ef17253" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.988610 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b61ae6544a56c04ba636242e30c587077676d7f96496d36675e778e55ef17253"} err="failed to get container status \"b61ae6544a56c04ba636242e30c587077676d7f96496d36675e778e55ef17253\": rpc error: code = NotFound desc = could not find container \"b61ae6544a56c04ba636242e30c587077676d7f96496d36675e778e55ef17253\": container with ID starting with b61ae6544a56c04ba636242e30c587077676d7f96496d36675e778e55ef17253 not found: ID does not exist" Dec 02 15:15:03 crc kubenswrapper[4900]: I1202 15:15:03.993518 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64484ff4c7-krmgz"] Dec 02 15:15:04 crc kubenswrapper[4900]: I1202 15:15:04.460470 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7"] Dec 02 15:15:04 crc kubenswrapper[4900]: I1202 15:15:04.468767 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-gq9q7"] Dec 02 15:15:04 crc kubenswrapper[4900]: I1202 15:15:04.922066 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9025b472-e2f6-4adb-a063-78c26a15a1ca" path="/var/lib/kubelet/pods/9025b472-e2f6-4adb-a063-78c26a15a1ca/volumes" Dec 02 15:15:04 crc kubenswrapper[4900]: I1202 15:15:04.923855 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ea0b54-fc53-41be-914a-699e24d18400" path="/var/lib/kubelet/pods/a3ea0b54-fc53-41be-914a-699e24d18400/volumes" Dec 02 15:15:28 crc kubenswrapper[4900]: I1202 15:15:28.454536 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:15:28 crc kubenswrapper[4900]: I1202 15:15:28.455090 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6cdf87577b-9k5x9" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.419608 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-276c9"] Dec 02 15:15:50 crc kubenswrapper[4900]: E1202 15:15:50.420532 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9025b472-e2f6-4adb-a063-78c26a15a1ca" containerName="init" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.420546 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9025b472-e2f6-4adb-a063-78c26a15a1ca" containerName="init" Dec 02 15:15:50 crc kubenswrapper[4900]: E1202 15:15:50.420562 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9025b472-e2f6-4adb-a063-78c26a15a1ca" containerName="dnsmasq-dns" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.420569 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9025b472-e2f6-4adb-a063-78c26a15a1ca" containerName="dnsmasq-dns" Dec 02 15:15:50 crc kubenswrapper[4900]: E1202 15:15:50.420579 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ce3395-7fb2-44c6-a046-01596df95ec7" containerName="collect-profiles" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.420585 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ce3395-7fb2-44c6-a046-01596df95ec7" containerName="collect-profiles" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.420781 4900 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9025b472-e2f6-4adb-a063-78c26a15a1ca" containerName="dnsmasq-dns" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.420802 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ce3395-7fb2-44c6-a046-01596df95ec7" containerName="collect-profiles" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.421392 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-276c9" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.434044 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-276c9"] Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.576727 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b289267-ed9d-42c8-8aa7-ff762dc944b2-operator-scripts\") pod \"nova-api-db-create-276c9\" (UID: \"6b289267-ed9d-42c8-8aa7-ff762dc944b2\") " pod="openstack/nova-api-db-create-276c9" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.576975 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk59r\" (UniqueName: \"kubernetes.io/projected/6b289267-ed9d-42c8-8aa7-ff762dc944b2-kube-api-access-fk59r\") pod \"nova-api-db-create-276c9\" (UID: \"6b289267-ed9d-42c8-8aa7-ff762dc944b2\") " pod="openstack/nova-api-db-create-276c9" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.619218 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7r5wn"] Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.620259 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7r5wn" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.631145 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7r5wn"] Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.640443 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1530-account-create-update-jmbtq"] Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.641527 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1530-account-create-update-jmbtq" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.643686 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.662164 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1530-account-create-update-jmbtq"] Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.678561 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b289267-ed9d-42c8-8aa7-ff762dc944b2-operator-scripts\") pod \"nova-api-db-create-276c9\" (UID: \"6b289267-ed9d-42c8-8aa7-ff762dc944b2\") " pod="openstack/nova-api-db-create-276c9" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.678666 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk59r\" (UniqueName: \"kubernetes.io/projected/6b289267-ed9d-42c8-8aa7-ff762dc944b2-kube-api-access-fk59r\") pod \"nova-api-db-create-276c9\" (UID: \"6b289267-ed9d-42c8-8aa7-ff762dc944b2\") " pod="openstack/nova-api-db-create-276c9" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.679376 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b289267-ed9d-42c8-8aa7-ff762dc944b2-operator-scripts\") pod \"nova-api-db-create-276c9\" (UID: \"6b289267-ed9d-42c8-8aa7-ff762dc944b2\") " pod="openstack/nova-api-db-create-276c9" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.719807 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk59r\" (UniqueName: \"kubernetes.io/projected/6b289267-ed9d-42c8-8aa7-ff762dc944b2-kube-api-access-fk59r\") pod \"nova-api-db-create-276c9\" (UID: \"6b289267-ed9d-42c8-8aa7-ff762dc944b2\") " pod="openstack/nova-api-db-create-276c9" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.758703 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mfpnk"] Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.759707 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mfpnk" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.767567 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mfpnk"] Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.780456 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brq9l\" (UniqueName: \"kubernetes.io/projected/e05fdeb9-4372-4614-8d2e-437a6467dea2-kube-api-access-brq9l\") pod \"nova-cell0-db-create-7r5wn\" (UID: \"e05fdeb9-4372-4614-8d2e-437a6467dea2\") " pod="openstack/nova-cell0-db-create-7r5wn" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.780505 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7da5713f-20df-4ec0-b8c2-857404cd476e-operator-scripts\") pod \"nova-api-1530-account-create-update-jmbtq\" (UID: \"7da5713f-20df-4ec0-b8c2-857404cd476e\") " pod="openstack/nova-api-1530-account-create-update-jmbtq" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.780559 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05fdeb9-4372-4614-8d2e-437a6467dea2-operator-scripts\") pod \"nova-cell0-db-create-7r5wn\" (UID: \"e05fdeb9-4372-4614-8d2e-437a6467dea2\") " pod="openstack/nova-cell0-db-create-7r5wn" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.780805 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-276c9" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.780973 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z7v4\" (UniqueName: \"kubernetes.io/projected/7da5713f-20df-4ec0-b8c2-857404cd476e-kube-api-access-4z7v4\") pod \"nova-api-1530-account-create-update-jmbtq\" (UID: \"7da5713f-20df-4ec0-b8c2-857404cd476e\") " pod="openstack/nova-api-1530-account-create-update-jmbtq" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.832101 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-12e0-account-create-update-5cz74"] Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.833245 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-12e0-account-create-update-5cz74" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.836573 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.840797 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-12e0-account-create-update-5cz74"] Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.882270 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brq9l\" (UniqueName: \"kubernetes.io/projected/e05fdeb9-4372-4614-8d2e-437a6467dea2-kube-api-access-brq9l\") pod \"nova-cell0-db-create-7r5wn\" (UID: \"e05fdeb9-4372-4614-8d2e-437a6467dea2\") " pod="openstack/nova-cell0-db-create-7r5wn" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.882433 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7da5713f-20df-4ec0-b8c2-857404cd476e-operator-scripts\") pod \"nova-api-1530-account-create-update-jmbtq\" (UID: \"7da5713f-20df-4ec0-b8c2-857404cd476e\") " pod="openstack/nova-api-1530-account-create-update-jmbtq" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.882503 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05fdeb9-4372-4614-8d2e-437a6467dea2-operator-scripts\") pod \"nova-cell0-db-create-7r5wn\" (UID: \"e05fdeb9-4372-4614-8d2e-437a6467dea2\") " pod="openstack/nova-cell0-db-create-7r5wn" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.882547 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv4zw\" (UniqueName: \"kubernetes.io/projected/256bfb3b-0b62-4c85-b687-8f84d248c1a4-kube-api-access-gv4zw\") pod \"nova-cell1-db-create-mfpnk\" (UID: \"256bfb3b-0b62-4c85-b687-8f84d248c1a4\") " pod="openstack/nova-cell1-db-create-mfpnk" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.882574 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z7v4\" (UniqueName: \"kubernetes.io/projected/7da5713f-20df-4ec0-b8c2-857404cd476e-kube-api-access-4z7v4\") pod \"nova-api-1530-account-create-update-jmbtq\" (UID: \"7da5713f-20df-4ec0-b8c2-857404cd476e\") " pod="openstack/nova-api-1530-account-create-update-jmbtq" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.882594 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/256bfb3b-0b62-4c85-b687-8f84d248c1a4-operator-scripts\") pod \"nova-cell1-db-create-mfpnk\" (UID: \"256bfb3b-0b62-4c85-b687-8f84d248c1a4\") " pod="openstack/nova-cell1-db-create-mfpnk" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.883141 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7da5713f-20df-4ec0-b8c2-857404cd476e-operator-scripts\") pod \"nova-api-1530-account-create-update-jmbtq\" (UID: \"7da5713f-20df-4ec0-b8c2-857404cd476e\") " pod="openstack/nova-api-1530-account-create-update-jmbtq" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.883284 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05fdeb9-4372-4614-8d2e-437a6467dea2-operator-scripts\") 
pod \"nova-cell0-db-create-7r5wn\" (UID: \"e05fdeb9-4372-4614-8d2e-437a6467dea2\") " pod="openstack/nova-cell0-db-create-7r5wn" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.898980 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z7v4\" (UniqueName: \"kubernetes.io/projected/7da5713f-20df-4ec0-b8c2-857404cd476e-kube-api-access-4z7v4\") pod \"nova-api-1530-account-create-update-jmbtq\" (UID: \"7da5713f-20df-4ec0-b8c2-857404cd476e\") " pod="openstack/nova-api-1530-account-create-update-jmbtq" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.916254 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brq9l\" (UniqueName: \"kubernetes.io/projected/e05fdeb9-4372-4614-8d2e-437a6467dea2-kube-api-access-brq9l\") pod \"nova-cell0-db-create-7r5wn\" (UID: \"e05fdeb9-4372-4614-8d2e-437a6467dea2\") " pod="openstack/nova-cell0-db-create-7r5wn" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.933967 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7r5wn" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.956866 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1530-account-create-update-jmbtq" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.984726 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/256bfb3b-0b62-4c85-b687-8f84d248c1a4-operator-scripts\") pod \"nova-cell1-db-create-mfpnk\" (UID: \"256bfb3b-0b62-4c85-b687-8f84d248c1a4\") " pod="openstack/nova-cell1-db-create-mfpnk" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.984833 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlrx\" (UniqueName: \"kubernetes.io/projected/6a6db4ab-d069-45ed-b01c-37508a8e2a78-kube-api-access-7rlrx\") pod \"nova-cell0-12e0-account-create-update-5cz74\" (UID: \"6a6db4ab-d069-45ed-b01c-37508a8e2a78\") " pod="openstack/nova-cell0-12e0-account-create-update-5cz74" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.985116 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv4zw\" (UniqueName: \"kubernetes.io/projected/256bfb3b-0b62-4c85-b687-8f84d248c1a4-kube-api-access-gv4zw\") pod \"nova-cell1-db-create-mfpnk\" (UID: \"256bfb3b-0b62-4c85-b687-8f84d248c1a4\") " pod="openstack/nova-cell1-db-create-mfpnk" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.985166 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a6db4ab-d069-45ed-b01c-37508a8e2a78-operator-scripts\") pod \"nova-cell0-12e0-account-create-update-5cz74\" (UID: \"6a6db4ab-d069-45ed-b01c-37508a8e2a78\") " pod="openstack/nova-cell0-12e0-account-create-update-5cz74" Dec 02 15:15:50 crc kubenswrapper[4900]: I1202 15:15:50.987661 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/256bfb3b-0b62-4c85-b687-8f84d248c1a4-operator-scripts\") pod \"nova-cell1-db-create-mfpnk\" (UID: \"256bfb3b-0b62-4c85-b687-8f84d248c1a4\") " pod="openstack/nova-cell1-db-create-mfpnk" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.017074 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv4zw\" 
(UniqueName: \"kubernetes.io/projected/256bfb3b-0b62-4c85-b687-8f84d248c1a4-kube-api-access-gv4zw\") pod \"nova-cell1-db-create-mfpnk\" (UID: \"256bfb3b-0b62-4c85-b687-8f84d248c1a4\") " pod="openstack/nova-cell1-db-create-mfpnk" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.038845 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-35fd-account-create-update-kbfxd"] Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.039877 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.042310 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.053555 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-35fd-account-create-update-kbfxd"] Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.081051 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mfpnk" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.086510 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlrx\" (UniqueName: \"kubernetes.io/projected/6a6db4ab-d069-45ed-b01c-37508a8e2a78-kube-api-access-7rlrx\") pod \"nova-cell0-12e0-account-create-update-5cz74\" (UID: \"6a6db4ab-d069-45ed-b01c-37508a8e2a78\") " pod="openstack/nova-cell0-12e0-account-create-update-5cz74" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.086629 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a6db4ab-d069-45ed-b01c-37508a8e2a78-operator-scripts\") pod \"nova-cell0-12e0-account-create-update-5cz74\" (UID: \"6a6db4ab-d069-45ed-b01c-37508a8e2a78\") " pod="openstack/nova-cell0-12e0-account-create-update-5cz74" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.087301 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a6db4ab-d069-45ed-b01c-37508a8e2a78-operator-scripts\") pod \"nova-cell0-12e0-account-create-update-5cz74\" (UID: \"6a6db4ab-d069-45ed-b01c-37508a8e2a78\") " pod="openstack/nova-cell0-12e0-account-create-update-5cz74" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.103903 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlrx\" (UniqueName: \"kubernetes.io/projected/6a6db4ab-d069-45ed-b01c-37508a8e2a78-kube-api-access-7rlrx\") pod \"nova-cell0-12e0-account-create-update-5cz74\" (UID: \"6a6db4ab-d069-45ed-b01c-37508a8e2a78\") " pod="openstack/nova-cell0-12e0-account-create-update-5cz74" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.183993 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-12e0-account-create-update-5cz74" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.188288 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgcfm\" (UniqueName: \"kubernetes.io/projected/32ed1538-6965-4e3b-900c-f1d66ae74adb-kube-api-access-fgcfm\") pod \"nova-cell1-35fd-account-create-update-kbfxd\" (UID: \"32ed1538-6965-4e3b-900c-f1d66ae74adb\") " pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.188397 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed1538-6965-4e3b-900c-f1d66ae74adb-operator-scripts\") pod \"nova-cell1-35fd-account-create-update-kbfxd\" (UID: \"32ed1538-6965-4e3b-900c-f1d66ae74adb\") " pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.290073 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed1538-6965-4e3b-900c-f1d66ae74adb-operator-scripts\") pod \"nova-cell1-35fd-account-create-update-kbfxd\" (UID: \"32ed1538-6965-4e3b-900c-f1d66ae74adb\") " pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.290168 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgcfm\" (UniqueName: \"kubernetes.io/projected/32ed1538-6965-4e3b-900c-f1d66ae74adb-kube-api-access-fgcfm\") pod \"nova-cell1-35fd-account-create-update-kbfxd\" (UID: \"32ed1538-6965-4e3b-900c-f1d66ae74adb\") " pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.290875 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed1538-6965-4e3b-900c-f1d66ae74adb-operator-scripts\") pod \"nova-cell1-35fd-account-create-update-kbfxd\" (UID: \"32ed1538-6965-4e3b-900c-f1d66ae74adb\") " pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.308133 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-276c9"] Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.312411 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgcfm\" (UniqueName: \"kubernetes.io/projected/32ed1538-6965-4e3b-900c-f1d66ae74adb-kube-api-access-fgcfm\") pod \"nova-cell1-35fd-account-create-update-kbfxd\" (UID: \"32ed1538-6965-4e3b-900c-f1d66ae74adb\") " pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.357726 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.430856 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7r5wn"] Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.478614 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7r5wn" event={"ID":"e05fdeb9-4372-4614-8d2e-437a6467dea2","Type":"ContainerStarted","Data":"a01d58b58e8c198d8633b402c6fe3af974b28f9096d288ca57c46f626de576de"} Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.480292 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-276c9" event={"ID":"6b289267-ed9d-42c8-8aa7-ff762dc944b2","Type":"ContainerStarted","Data":"7157dac4453d344bad005fbd245849969464cbae3bdfedc22e0f2c0d1c53587a"} Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.498465 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1530-account-create-update-jmbtq"] Dec 02 15:15:51 crc kubenswrapper[4900]: W1202 15:15:51.511312 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7da5713f_20df_4ec0_b8c2_857404cd476e.slice/crio-c7ba0003061de3e2adc0f699cd6e39ac538463e79b3dc122767ef8ecf965d400 WatchSource:0}: Error finding container c7ba0003061de3e2adc0f699cd6e39ac538463e79b3dc122767ef8ecf965d400: Status 404 returned error can't find the container with id c7ba0003061de3e2adc0f699cd6e39ac538463e79b3dc122767ef8ecf965d400 Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.588018 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mfpnk"] Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.648822 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-12e0-account-create-update-5cz74"] Dec 02 15:15:51 crc kubenswrapper[4900]: W1202 15:15:51.660832 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a6db4ab_d069_45ed_b01c_37508a8e2a78.slice/crio-192540bb44d745859c7b3efe19288c7f7f3f948235b18b63a689eccc537edadd WatchSource:0}: Error finding container 192540bb44d745859c7b3efe19288c7f7f3f948235b18b63a689eccc537edadd: Status 404 returned error can't find the container with id 192540bb44d745859c7b3efe19288c7f7f3f948235b18b63a689eccc537edadd Dec 02 15:15:51 crc kubenswrapper[4900]: I1202 15:15:51.860149 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-35fd-account-create-update-kbfxd"] Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.501040 4900 generic.go:334] "Generic (PLEG): container finished" podID="256bfb3b-0b62-4c85-b687-8f84d248c1a4" containerID="ed195d3cabbdb44f8b4be3d6d5021e17696247d3ac44322c75cfae752fe5f057" exitCode=0 Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.501122 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mfpnk" event={"ID":"256bfb3b-0b62-4c85-b687-8f84d248c1a4","Type":"ContainerDied","Data":"ed195d3cabbdb44f8b4be3d6d5021e17696247d3ac44322c75cfae752fe5f057"} Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.501685 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mfpnk" event={"ID":"256bfb3b-0b62-4c85-b687-8f84d248c1a4","Type":"ContainerStarted","Data":"d4c568d037b0aceba1172be02e67d26e5f48967c2f55c65d9538656ff354846e"} Dec 02 15:15:52 crc 
kubenswrapper[4900]: I1202 15:15:52.506727 4900 generic.go:334] "Generic (PLEG): container finished" podID="6b289267-ed9d-42c8-8aa7-ff762dc944b2" containerID="24a999f82f75daaa9aaa9a8873b8e93345a62c9d06f76b180d1408aa09f83359" exitCode=0 Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.506798 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-276c9" event={"ID":"6b289267-ed9d-42c8-8aa7-ff762dc944b2","Type":"ContainerDied","Data":"24a999f82f75daaa9aaa9a8873b8e93345a62c9d06f76b180d1408aa09f83359"} Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.509165 4900 generic.go:334] "Generic (PLEG): container finished" podID="6a6db4ab-d069-45ed-b01c-37508a8e2a78" containerID="58f9aafa2275e9034f9607eccdaff84db33cf6d7757cc0a280c46da303fa667a" exitCode=0 Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.509241 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-12e0-account-create-update-5cz74" event={"ID":"6a6db4ab-d069-45ed-b01c-37508a8e2a78","Type":"ContainerDied","Data":"58f9aafa2275e9034f9607eccdaff84db33cf6d7757cc0a280c46da303fa667a"} Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.509270 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-12e0-account-create-update-5cz74" event={"ID":"6a6db4ab-d069-45ed-b01c-37508a8e2a78","Type":"ContainerStarted","Data":"192540bb44d745859c7b3efe19288c7f7f3f948235b18b63a689eccc537edadd"} Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.512394 4900 generic.go:334] "Generic (PLEG): container finished" podID="32ed1538-6965-4e3b-900c-f1d66ae74adb" containerID="88dfd412f3cb057f0fc7dad09a4a77b5f326ceb5021aee76675bec48e75ff2f1" exitCode=0 Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.512437 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" event={"ID":"32ed1538-6965-4e3b-900c-f1d66ae74adb","Type":"ContainerDied","Data":"88dfd412f3cb057f0fc7dad09a4a77b5f326ceb5021aee76675bec48e75ff2f1"} Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.512509 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" event={"ID":"32ed1538-6965-4e3b-900c-f1d66ae74adb","Type":"ContainerStarted","Data":"ce30cba7c5b5c66778d14b65d98194d8f35b8d586c03221ce81acfc28d7e443b"} Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.515832 4900 generic.go:334] "Generic (PLEG): container finished" podID="7da5713f-20df-4ec0-b8c2-857404cd476e" containerID="2c5b1eb059e1370ccdb24fa4364af99d28a027734a06c4458f794ca88a527251" exitCode=0 Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.515925 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1530-account-create-update-jmbtq" event={"ID":"7da5713f-20df-4ec0-b8c2-857404cd476e","Type":"ContainerDied","Data":"2c5b1eb059e1370ccdb24fa4364af99d28a027734a06c4458f794ca88a527251"} Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.515955 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1530-account-create-update-jmbtq" event={"ID":"7da5713f-20df-4ec0-b8c2-857404cd476e","Type":"ContainerStarted","Data":"c7ba0003061de3e2adc0f699cd6e39ac538463e79b3dc122767ef8ecf965d400"} Dec 02 15:15:52 crc kubenswrapper[4900]: I1202 15:15:52.521788 4900 generic.go:334] "Generic (PLEG): container finished" podID="e05fdeb9-4372-4614-8d2e-437a6467dea2" containerID="7d995ff735a2edfb811d120417d5555f27119882ba26a352202587f9d504bb8b" exitCode=0 Dec 02 15:15:52 crc 
kubenswrapper[4900]: I1202 15:15:52.521850 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7r5wn" event={"ID":"e05fdeb9-4372-4614-8d2e-437a6467dea2","Type":"ContainerDied","Data":"7d995ff735a2edfb811d120417d5555f27119882ba26a352202587f9d504bb8b"} Dec 02 15:15:53 crc kubenswrapper[4900]: I1202 15:15:53.968835 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-276c9" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.036709 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk59r\" (UniqueName: \"kubernetes.io/projected/6b289267-ed9d-42c8-8aa7-ff762dc944b2-kube-api-access-fk59r\") pod \"6b289267-ed9d-42c8-8aa7-ff762dc944b2\" (UID: \"6b289267-ed9d-42c8-8aa7-ff762dc944b2\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.036768 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b289267-ed9d-42c8-8aa7-ff762dc944b2-operator-scripts\") pod \"6b289267-ed9d-42c8-8aa7-ff762dc944b2\" (UID: \"6b289267-ed9d-42c8-8aa7-ff762dc944b2\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.038512 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b289267-ed9d-42c8-8aa7-ff762dc944b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b289267-ed9d-42c8-8aa7-ff762dc944b2" (UID: "6b289267-ed9d-42c8-8aa7-ff762dc944b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.043447 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b289267-ed9d-42c8-8aa7-ff762dc944b2-kube-api-access-fk59r" (OuterVolumeSpecName: "kube-api-access-fk59r") pod "6b289267-ed9d-42c8-8aa7-ff762dc944b2" (UID: "6b289267-ed9d-42c8-8aa7-ff762dc944b2"). InnerVolumeSpecName "kube-api-access-fk59r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.139605 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk59r\" (UniqueName: \"kubernetes.io/projected/6b289267-ed9d-42c8-8aa7-ff762dc944b2-kube-api-access-fk59r\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.139661 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b289267-ed9d-42c8-8aa7-ff762dc944b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.187155 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7r5wn" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.193480 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.206064 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1530-account-create-update-jmbtq" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.214066 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-12e0-account-create-update-5cz74" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.224839 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mfpnk" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.241208 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed1538-6965-4e3b-900c-f1d66ae74adb-operator-scripts\") pod \"32ed1538-6965-4e3b-900c-f1d66ae74adb\" (UID: \"32ed1538-6965-4e3b-900c-f1d66ae74adb\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.241300 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgcfm\" (UniqueName: \"kubernetes.io/projected/32ed1538-6965-4e3b-900c-f1d66ae74adb-kube-api-access-fgcfm\") pod \"32ed1538-6965-4e3b-900c-f1d66ae74adb\" (UID: \"32ed1538-6965-4e3b-900c-f1d66ae74adb\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.241330 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05fdeb9-4372-4614-8d2e-437a6467dea2-operator-scripts\") pod \"e05fdeb9-4372-4614-8d2e-437a6467dea2\" (UID: \"e05fdeb9-4372-4614-8d2e-437a6467dea2\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.241403 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brq9l\" (UniqueName: \"kubernetes.io/projected/e05fdeb9-4372-4614-8d2e-437a6467dea2-kube-api-access-brq9l\") pod \"e05fdeb9-4372-4614-8d2e-437a6467dea2\" (UID: \"e05fdeb9-4372-4614-8d2e-437a6467dea2\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.241811 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e05fdeb9-4372-4614-8d2e-437a6467dea2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e05fdeb9-4372-4614-8d2e-437a6467dea2" (UID: "e05fdeb9-4372-4614-8d2e-437a6467dea2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.242345 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ed1538-6965-4e3b-900c-f1d66ae74adb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32ed1538-6965-4e3b-900c-f1d66ae74adb" (UID: "32ed1538-6965-4e3b-900c-f1d66ae74adb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.244440 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ed1538-6965-4e3b-900c-f1d66ae74adb-kube-api-access-fgcfm" (OuterVolumeSpecName: "kube-api-access-fgcfm") pod "32ed1538-6965-4e3b-900c-f1d66ae74adb" (UID: "32ed1538-6965-4e3b-900c-f1d66ae74adb"). InnerVolumeSpecName "kube-api-access-fgcfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.245099 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e05fdeb9-4372-4614-8d2e-437a6467dea2-kube-api-access-brq9l" (OuterVolumeSpecName: "kube-api-access-brq9l") pod "e05fdeb9-4372-4614-8d2e-437a6467dea2" (UID: "e05fdeb9-4372-4614-8d2e-437a6467dea2"). InnerVolumeSpecName "kube-api-access-brq9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.343267 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/256bfb3b-0b62-4c85-b687-8f84d248c1a4-operator-scripts\") pod \"256bfb3b-0b62-4c85-b687-8f84d248c1a4\" (UID: \"256bfb3b-0b62-4c85-b687-8f84d248c1a4\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.343419 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a6db4ab-d069-45ed-b01c-37508a8e2a78-operator-scripts\") pod \"6a6db4ab-d069-45ed-b01c-37508a8e2a78\" (UID: \"6a6db4ab-d069-45ed-b01c-37508a8e2a78\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.343486 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rlrx\" (UniqueName: \"kubernetes.io/projected/6a6db4ab-d069-45ed-b01c-37508a8e2a78-kube-api-access-7rlrx\") pod \"6a6db4ab-d069-45ed-b01c-37508a8e2a78\" (UID: \"6a6db4ab-d069-45ed-b01c-37508a8e2a78\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.343527 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z7v4\" (UniqueName: \"kubernetes.io/projected/7da5713f-20df-4ec0-b8c2-857404cd476e-kube-api-access-4z7v4\") pod \"7da5713f-20df-4ec0-b8c2-857404cd476e\" (UID: \"7da5713f-20df-4ec0-b8c2-857404cd476e\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.343565 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7da5713f-20df-4ec0-b8c2-857404cd476e-operator-scripts\") pod \"7da5713f-20df-4ec0-b8c2-857404cd476e\" (UID: \"7da5713f-20df-4ec0-b8c2-857404cd476e\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.343630 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv4zw\" (UniqueName: \"kubernetes.io/projected/256bfb3b-0b62-4c85-b687-8f84d248c1a4-kube-api-access-gv4zw\") pod \"256bfb3b-0b62-4c85-b687-8f84d248c1a4\" (UID: \"256bfb3b-0b62-4c85-b687-8f84d248c1a4\") " Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.343955 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/256bfb3b-0b62-4c85-b687-8f84d248c1a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "256bfb3b-0b62-4c85-b687-8f84d248c1a4" (UID: "256bfb3b-0b62-4c85-b687-8f84d248c1a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.344125 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brq9l\" (UniqueName: \"kubernetes.io/projected/e05fdeb9-4372-4614-8d2e-437a6467dea2-kube-api-access-brq9l\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.344119 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a6db4ab-d069-45ed-b01c-37508a8e2a78-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a6db4ab-d069-45ed-b01c-37508a8e2a78" (UID: "6a6db4ab-d069-45ed-b01c-37508a8e2a78"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.344173 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed1538-6965-4e3b-900c-f1d66ae74adb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.344197 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgcfm\" (UniqueName: \"kubernetes.io/projected/32ed1538-6965-4e3b-900c-f1d66ae74adb-kube-api-access-fgcfm\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.344213 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05fdeb9-4372-4614-8d2e-437a6467dea2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.344282 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da5713f-20df-4ec0-b8c2-857404cd476e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7da5713f-20df-4ec0-b8c2-857404cd476e" (UID: "7da5713f-20df-4ec0-b8c2-857404cd476e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.346215 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256bfb3b-0b62-4c85-b687-8f84d248c1a4-kube-api-access-gv4zw" (OuterVolumeSpecName: "kube-api-access-gv4zw") pod "256bfb3b-0b62-4c85-b687-8f84d248c1a4" (UID: "256bfb3b-0b62-4c85-b687-8f84d248c1a4"). InnerVolumeSpecName "kube-api-access-gv4zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.346659 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a6db4ab-d069-45ed-b01c-37508a8e2a78-kube-api-access-7rlrx" (OuterVolumeSpecName: "kube-api-access-7rlrx") pod "6a6db4ab-d069-45ed-b01c-37508a8e2a78" (UID: "6a6db4ab-d069-45ed-b01c-37508a8e2a78"). InnerVolumeSpecName "kube-api-access-7rlrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.346712 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da5713f-20df-4ec0-b8c2-857404cd476e-kube-api-access-4z7v4" (OuterVolumeSpecName: "kube-api-access-4z7v4") pod "7da5713f-20df-4ec0-b8c2-857404cd476e" (UID: "7da5713f-20df-4ec0-b8c2-857404cd476e"). InnerVolumeSpecName "kube-api-access-4z7v4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.446055 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a6db4ab-d069-45ed-b01c-37508a8e2a78-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.446089 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rlrx\" (UniqueName: \"kubernetes.io/projected/6a6db4ab-d069-45ed-b01c-37508a8e2a78-kube-api-access-7rlrx\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.446102 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z7v4\" (UniqueName: \"kubernetes.io/projected/7da5713f-20df-4ec0-b8c2-857404cd476e-kube-api-access-4z7v4\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.446114 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7da5713f-20df-4ec0-b8c2-857404cd476e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.446124 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv4zw\" (UniqueName: \"kubernetes.io/projected/256bfb3b-0b62-4c85-b687-8f84d248c1a4-kube-api-access-gv4zw\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.446160 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/256bfb3b-0b62-4c85-b687-8f84d248c1a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.550499 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mfpnk" event={"ID":"256bfb3b-0b62-4c85-b687-8f84d248c1a4","Type":"ContainerDied","Data":"d4c568d037b0aceba1172be02e67d26e5f48967c2f55c65d9538656ff354846e"} Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.550523 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mfpnk" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.550546 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4c568d037b0aceba1172be02e67d26e5f48967c2f55c65d9538656ff354846e" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.552423 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-276c9" event={"ID":"6b289267-ed9d-42c8-8aa7-ff762dc944b2","Type":"ContainerDied","Data":"7157dac4453d344bad005fbd245849969464cbae3bdfedc22e0f2c0d1c53587a"} Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.552455 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7157dac4453d344bad005fbd245849969464cbae3bdfedc22e0f2c0d1c53587a" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.552519 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-276c9" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.556015 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-12e0-account-create-update-5cz74" event={"ID":"6a6db4ab-d069-45ed-b01c-37508a8e2a78","Type":"ContainerDied","Data":"192540bb44d745859c7b3efe19288c7f7f3f948235b18b63a689eccc537edadd"} Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.556051 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="192540bb44d745859c7b3efe19288c7f7f3f948235b18b63a689eccc537edadd" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.556049 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-12e0-account-create-update-5cz74" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.561372 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" event={"ID":"32ed1538-6965-4e3b-900c-f1d66ae74adb","Type":"ContainerDied","Data":"ce30cba7c5b5c66778d14b65d98194d8f35b8d586c03221ce81acfc28d7e443b"} Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.561427 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-35fd-account-create-update-kbfxd" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.561435 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce30cba7c5b5c66778d14b65d98194d8f35b8d586c03221ce81acfc28d7e443b" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.565237 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1530-account-create-update-jmbtq" event={"ID":"7da5713f-20df-4ec0-b8c2-857404cd476e","Type":"ContainerDied","Data":"c7ba0003061de3e2adc0f699cd6e39ac538463e79b3dc122767ef8ecf965d400"} Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.565286 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7ba0003061de3e2adc0f699cd6e39ac538463e79b3dc122767ef8ecf965d400" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.565358 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1530-account-create-update-jmbtq" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.572819 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7r5wn" event={"ID":"e05fdeb9-4372-4614-8d2e-437a6467dea2","Type":"ContainerDied","Data":"a01d58b58e8c198d8633b402c6fe3af974b28f9096d288ca57c46f626de576de"} Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.572863 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a01d58b58e8c198d8633b402c6fe3af974b28f9096d288ca57c46f626de576de" Dec 02 15:15:54 crc kubenswrapper[4900]: I1202 15:15:54.573092 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7r5wn" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.111444 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cn752"] Dec 02 15:15:56 crc kubenswrapper[4900]: E1202 15:15:56.112211 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e05fdeb9-4372-4614-8d2e-437a6467dea2" containerName="mariadb-database-create" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112230 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05fdeb9-4372-4614-8d2e-437a6467dea2" containerName="mariadb-database-create" Dec 02 15:15:56 crc kubenswrapper[4900]: E1202 15:15:56.112246 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da5713f-20df-4ec0-b8c2-857404cd476e" containerName="mariadb-account-create-update" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112254 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da5713f-20df-4ec0-b8c2-857404cd476e" containerName="mariadb-account-create-update" Dec 02 15:15:56 crc kubenswrapper[4900]: E1202 15:15:56.112274 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b289267-ed9d-42c8-8aa7-ff762dc944b2" containerName="mariadb-database-create" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112285 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b289267-ed9d-42c8-8aa7-ff762dc944b2" containerName="mariadb-database-create" Dec 02 15:15:56 crc kubenswrapper[4900]: E1202 15:15:56.112308 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ed1538-6965-4e3b-900c-f1d66ae74adb" containerName="mariadb-account-create-update" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112315 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ed1538-6965-4e3b-900c-f1d66ae74adb" containerName="mariadb-account-create-update" Dec 02 15:15:56 crc kubenswrapper[4900]: E1202 15:15:56.112330 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256bfb3b-0b62-4c85-b687-8f84d248c1a4" containerName="mariadb-database-create" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112336 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="256bfb3b-0b62-4c85-b687-8f84d248c1a4" containerName="mariadb-database-create" Dec 02 15:15:56 crc kubenswrapper[4900]: E1202 15:15:56.112347 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6db4ab-d069-45ed-b01c-37508a8e2a78" containerName="mariadb-account-create-update" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112355 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6db4ab-d069-45ed-b01c-37508a8e2a78" containerName="mariadb-account-create-update" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112561 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="256bfb3b-0b62-4c85-b687-8f84d248c1a4" containerName="mariadb-database-create" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112583 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da5713f-20df-4ec0-b8c2-857404cd476e" containerName="mariadb-account-create-update" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112601 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b289267-ed9d-42c8-8aa7-ff762dc944b2" containerName="mariadb-database-create" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112614 4900 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e05fdeb9-4372-4614-8d2e-437a6467dea2" containerName="mariadb-database-create" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112630 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6db4ab-d069-45ed-b01c-37508a8e2a78" containerName="mariadb-account-create-update" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.112666 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ed1538-6965-4e3b-900c-f1d66ae74adb" containerName="mariadb-account-create-update" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.113375 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.116466 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.117028 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tcxf9" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.117190 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.127376 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cn752"] Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.182108 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-scripts\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.182284 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.182382 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-config-data\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.182569 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ghrd\" (UniqueName: \"kubernetes.io/projected/f706357c-15e7-4c4c-abbb-d0f793926d53-kube-api-access-9ghrd\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.284559 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-scripts\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.284683 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.284724 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-config-data\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.284796 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ghrd\" (UniqueName: \"kubernetes.io/projected/f706357c-15e7-4c4c-abbb-d0f793926d53-kube-api-access-9ghrd\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.291614 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.291684 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-config-data\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.293148 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-scripts\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.309636 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ghrd\" (UniqueName: \"kubernetes.io/projected/f706357c-15e7-4c4c-abbb-d0f793926d53-kube-api-access-9ghrd\") pod \"nova-cell0-conductor-db-sync-cn752\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.434262 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:15:56 crc kubenswrapper[4900]: I1202 15:15:56.956095 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cn752"] Dec 02 15:15:56 crc kubenswrapper[4900]: W1202 15:15:56.964795 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf706357c_15e7_4c4c_abbb_d0f793926d53.slice/crio-45febc43859ef813123034d9eb7c247d68c500f9226c2894d5a68a0d96322c1f WatchSource:0}: Error finding container 45febc43859ef813123034d9eb7c247d68c500f9226c2894d5a68a0d96322c1f: Status 404 returned error can't find the container with id 45febc43859ef813123034d9eb7c247d68c500f9226c2894d5a68a0d96322c1f Dec 02 15:15:57 crc kubenswrapper[4900]: I1202 15:15:57.600845 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cn752" event={"ID":"f706357c-15e7-4c4c-abbb-d0f793926d53","Type":"ContainerStarted","Data":"399768edcc20c611c7815f0631aa03d6b5b1e68840586eef3cbc72af56107695"} Dec 02 15:15:57 crc kubenswrapper[4900]: I1202 15:15:57.601262 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cn752" event={"ID":"f706357c-15e7-4c4c-abbb-d0f793926d53","Type":"ContainerStarted","Data":"45febc43859ef813123034d9eb7c247d68c500f9226c2894d5a68a0d96322c1f"} Dec 02 15:15:57 crc kubenswrapper[4900]: I1202 15:15:57.631366 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-cn752" podStartSLOduration=1.631339594 podStartE2EDuration="1.631339594s" podCreationTimestamp="2025-12-02 15:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:15:57.621365914 +0000 UTC m=+5603.037179795" watchObservedRunningTime="2025-12-02 15:15:57.631339594 +0000 UTC m=+5603.047153455" Dec 02 15:16:01 crc kubenswrapper[4900]: I1202 15:16:01.321455 4900 scope.go:117] "RemoveContainer" containerID="ee7b3a6ef268d5351c724802c5e4dd8fa499c56013249d49a7d2c52e3fc2eec9" Dec 02 15:16:01 crc kubenswrapper[4900]: I1202 15:16:01.353244 4900 scope.go:117] "RemoveContainer" containerID="10d4cf8329f1fd707442acf67a1e32113932d8ab6b32f1c4af4a69eb890f7c87" Dec 02 15:16:02 crc kubenswrapper[4900]: I1202 15:16:02.656497 4900 generic.go:334] "Generic (PLEG): container finished" podID="f706357c-15e7-4c4c-abbb-d0f793926d53" containerID="399768edcc20c611c7815f0631aa03d6b5b1e68840586eef3cbc72af56107695" exitCode=0 Dec 02 15:16:02 crc kubenswrapper[4900]: I1202 15:16:02.656590 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cn752" event={"ID":"f706357c-15e7-4c4c-abbb-d0f793926d53","Type":"ContainerDied","Data":"399768edcc20c611c7815f0631aa03d6b5b1e68840586eef3cbc72af56107695"} Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.060597 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.152315 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ghrd\" (UniqueName: \"kubernetes.io/projected/f706357c-15e7-4c4c-abbb-d0f793926d53-kube-api-access-9ghrd\") pod \"f706357c-15e7-4c4c-abbb-d0f793926d53\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.152634 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-config-data\") pod \"f706357c-15e7-4c4c-abbb-d0f793926d53\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.152760 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-combined-ca-bundle\") pod \"f706357c-15e7-4c4c-abbb-d0f793926d53\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.152780 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-scripts\") pod \"f706357c-15e7-4c4c-abbb-d0f793926d53\" (UID: \"f706357c-15e7-4c4c-abbb-d0f793926d53\") " Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.157687 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-scripts" (OuterVolumeSpecName: "scripts") pod "f706357c-15e7-4c4c-abbb-d0f793926d53" (UID: "f706357c-15e7-4c4c-abbb-d0f793926d53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.162736 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f706357c-15e7-4c4c-abbb-d0f793926d53-kube-api-access-9ghrd" (OuterVolumeSpecName: "kube-api-access-9ghrd") pod "f706357c-15e7-4c4c-abbb-d0f793926d53" (UID: "f706357c-15e7-4c4c-abbb-d0f793926d53"). InnerVolumeSpecName "kube-api-access-9ghrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.188197 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-config-data" (OuterVolumeSpecName: "config-data") pod "f706357c-15e7-4c4c-abbb-d0f793926d53" (UID: "f706357c-15e7-4c4c-abbb-d0f793926d53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.192139 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f706357c-15e7-4c4c-abbb-d0f793926d53" (UID: "f706357c-15e7-4c4c-abbb-d0f793926d53"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.254494 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.254866 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.255115 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ghrd\" (UniqueName: \"kubernetes.io/projected/f706357c-15e7-4c4c-abbb-d0f793926d53-kube-api-access-9ghrd\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.255243 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f706357c-15e7-4c4c-abbb-d0f793926d53-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.687521 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cn752" event={"ID":"f706357c-15e7-4c4c-abbb-d0f793926d53","Type":"ContainerDied","Data":"45febc43859ef813123034d9eb7c247d68c500f9226c2894d5a68a0d96322c1f"} Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.687567 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45febc43859ef813123034d9eb7c247d68c500f9226c2894d5a68a0d96322c1f" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.687698 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cn752" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.773422 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 15:16:04 crc kubenswrapper[4900]: E1202 15:16:04.774170 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f706357c-15e7-4c4c-abbb-d0f793926d53" containerName="nova-cell0-conductor-db-sync" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.774285 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f706357c-15e7-4c4c-abbb-d0f793926d53" containerName="nova-cell0-conductor-db-sync" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.774579 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f706357c-15e7-4c4c-abbb-d0f793926d53" containerName="nova-cell0-conductor-db-sync" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.775424 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.778804 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tcxf9" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.778963 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.799139 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.864567 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") " pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.864774 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57fc\" (UniqueName: \"kubernetes.io/projected/9548e976-6686-45d3-9b04-74b567fc4b5d-kube-api-access-l57fc\") pod \"nova-cell0-conductor-0\" (UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") " pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.864836 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") " pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.967150 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") " pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.967283 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l57fc\" (UniqueName: \"kubernetes.io/projected/9548e976-6686-45d3-9b04-74b567fc4b5d-kube-api-access-l57fc\") pod \"nova-cell0-conductor-0\" (UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") " pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.967364 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") " pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.973570 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") " pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.975074 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") " pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:04 crc kubenswrapper[4900]: I1202 15:16:04.993683 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l57fc\" (UniqueName: \"kubernetes.io/projected/9548e976-6686-45d3-9b04-74b567fc4b5d-kube-api-access-l57fc\") pod \"nova-cell0-conductor-0\" (UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") " pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:05 crc kubenswrapper[4900]: I1202 15:16:05.112600 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:05 crc kubenswrapper[4900]: I1202 15:16:05.596022 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 15:16:05 crc kubenswrapper[4900]: I1202 15:16:05.705446 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9548e976-6686-45d3-9b04-74b567fc4b5d","Type":"ContainerStarted","Data":"a93b9be05c7ecadc2ed8b755a761c0590001dfc7ace84bcea6ff28ad693d0467"} Dec 02 15:16:06 crc kubenswrapper[4900]: I1202 15:16:06.721251 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9548e976-6686-45d3-9b04-74b567fc4b5d","Type":"ContainerStarted","Data":"1f6c3681e4f06007f3b716ee5692554a89699f0a7937879ffce2af1d3b01fcd7"} Dec 02 15:16:06 crc kubenswrapper[4900]: I1202 15:16:06.721753 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:06 crc kubenswrapper[4900]: I1202 15:16:06.759847 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.759821464 podStartE2EDuration="2.759821464s" podCreationTimestamp="2025-12-02 15:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:06.740708998 +0000 UTC m=+5612.156522869" watchObservedRunningTime="2025-12-02 15:16:06.759821464 +0000 UTC m=+5612.175635355" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.160810 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.604763 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-gf82k"] Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.606142 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.608907 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.610021 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.617533 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gf82k"] Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.742944 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.744529 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.746916 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.804819 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.804874 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-config-data\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.804986 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-scripts\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.805048 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gnbk\" (UniqueName: \"kubernetes.io/projected/cebb257b-4ffb-40bd-a873-77699f11ee7f-kube-api-access-5gnbk\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.817729 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.914324 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-scripts\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.914390 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-config-data\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.914425 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.914450 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e8928c5-5d77-4a39-a11f-55d02b1e9296-logs\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.914488 4900 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gnbk\" (UniqueName: \"kubernetes.io/projected/cebb257b-4ffb-40bd-a873-77699f11ee7f-kube-api-access-5gnbk\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.914529 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmqfw\" (UniqueName: \"kubernetes.io/projected/7e8928c5-5d77-4a39-a11f-55d02b1e9296-kube-api-access-pmqfw\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.914621 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.914670 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-config-data\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.933632 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-scripts\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.935542 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:10 crc kubenswrapper[4900]: I1202 15:16:10.946582 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-config-data\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.035098 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gnbk\" (UniqueName: \"kubernetes.io/projected/cebb257b-4ffb-40bd-a873-77699f11ee7f-kube-api-access-5gnbk\") pod \"nova-cell0-cell-mapping-gf82k\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.036413 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.036477 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-config-data\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0" Dec 02 15:16:11 crc 
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.036535 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.036562 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e8928c5-5d77-4a39-a11f-55d02b1e9296-logs\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.036622 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmqfw\" (UniqueName: \"kubernetes.io/projected/7e8928c5-5d77-4a39-a11f-55d02b1e9296-kube-api-access-pmqfw\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.040920 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e8928c5-5d77-4a39-a11f-55d02b1e9296-logs\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.043840 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.041110 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-config-data\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.047932 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.061848 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.065904 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.076491 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmqfw\" (UniqueName: \"kubernetes.io/projected/7e8928c5-5d77-4a39-a11f-55d02b1e9296-kube-api-access-pmqfw\") pod \"nova-api-0\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " pod="openstack/nova-api-0"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.089387 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.091005 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.093243 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.108437 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.120321 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.121705 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.131541 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.145853 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d5d4667dc-8gnfx"]
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.148091 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.169468 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.177532 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5d4667dc-8gnfx"]
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.184171 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.231930 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gf82k"
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.244236 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-config-data\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.244300 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84bqh\" (UniqueName: \"kubernetes.io/projected/a4a0858a-5a98-480b-a879-ecebcccd694f-kube-api-access-84bqh\") pod \"nova-scheduler-0\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.244333 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.245558 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.245612 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-config-data\") pod \"nova-scheduler-0\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.245778 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2nb\" (UniqueName: \"kubernetes.io/projected/385bfba8-cd50-4143-9e34-1245fbfe7d9b-kube-api-access-zk2nb\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.245888 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385bfba8-cd50-4143-9e34-1245fbfe7d9b-logs\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.245959 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.246055 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qms8\" (UniqueName: \"kubernetes.io/projected/11e8048a-0901-478c-bf94-cc9fb894bdad-kube-api-access-9qms8\") pod \"nova-cell1-novncproxy-0\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.246081 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.347557 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.347983 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-config\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348032 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qms8\" (UniqueName: \"kubernetes.io/projected/11e8048a-0901-478c-bf94-cc9fb894bdad-kube-api-access-9qms8\") pod \"nova-cell1-novncproxy-0\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348059 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348083 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-dns-svc\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348132 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-config-data\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348161 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84bqh\" (UniqueName: \"kubernetes.io/projected/a4a0858a-5a98-480b-a879-ecebcccd694f-kube-api-access-84bqh\") pod \"nova-scheduler-0\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348190 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348227 4900 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348252 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmsd4\" (UniqueName: \"kubernetes.io/projected/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-kube-api-access-jmsd4\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348290 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348311 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-config-data\") pod \"nova-scheduler-0\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348368 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk2nb\" (UniqueName: \"kubernetes.io/projected/385bfba8-cd50-4143-9e34-1245fbfe7d9b-kube-api-access-zk2nb\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348425 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385bfba8-cd50-4143-9e34-1245fbfe7d9b-logs\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.348456 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.355682 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.355959 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.356076 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/385bfba8-cd50-4143-9e34-1245fbfe7d9b-logs\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.356636 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.368324 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-config-data\") pod \"nova-scheduler-0\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.368889 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-config-data\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.368905 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.377340 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84bqh\" (UniqueName: \"kubernetes.io/projected/a4a0858a-5a98-480b-a879-ecebcccd694f-kube-api-access-84bqh\") pod \"nova-scheduler-0\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.380374 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qms8\" (UniqueName: \"kubernetes.io/projected/11e8048a-0901-478c-bf94-cc9fb894bdad-kube-api-access-9qms8\") pod \"nova-cell1-novncproxy-0\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.383144 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk2nb\" (UniqueName: \"kubernetes.io/projected/385bfba8-cd50-4143-9e34-1245fbfe7d9b-kube-api-access-zk2nb\") pod \"nova-metadata-0\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.432575 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.445008 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.449705 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.449820 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-config\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.449872 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-dns-svc\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.449929 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.449953 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmsd4\" (UniqueName: \"kubernetes.io/projected/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-kube-api-access-jmsd4\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.450610 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.451224 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-dns-svc\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.451291 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.451405 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-config\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.457836 4900 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.468505 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmsd4\" (UniqueName: \"kubernetes.io/projected/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-kube-api-access-jmsd4\") pod \"dnsmasq-dns-7d5d4667dc-8gnfx\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.472128 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.523283 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jf5cj"] Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.524356 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.527768 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.528130 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.545160 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jf5cj"] Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.551795 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-scripts\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.551909 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.551935 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-config-data\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.552024 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjfzm\" (UniqueName: \"kubernetes.io/projected/fb90faf1-bd20-4634-93f6-977be90a5a93-kube-api-access-jjfzm\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.656769 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: 
\"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.656823 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-config-data\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.656928 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjfzm\" (UniqueName: \"kubernetes.io/projected/fb90faf1-bd20-4634-93f6-977be90a5a93-kube-api-access-jjfzm\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.656999 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-scripts\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.664387 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-config-data\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.664742 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-scripts\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.666124 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.672411 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjfzm\" (UniqueName: \"kubernetes.io/projected/fb90faf1-bd20-4634-93f6-977be90a5a93-kube-api-access-jjfzm\") pod \"nova-cell1-conductor-db-sync-jf5cj\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.725453 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.812230 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gf82k"] Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.823858 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e8928c5-5d77-4a39-a11f-55d02b1e9296","Type":"ContainerStarted","Data":"5a97a31bcc6864db74db3c0b49dc96f9013978daa65e7dc37962eb8562a2eb5d"} Dec 02 15:16:11 crc kubenswrapper[4900]: W1202 15:16:11.831522 4900 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcebb257b_4ffb_40bd_a873_77699f11ee7f.slice/crio-8f2caf12071bb82e5b52dddf36bcecc5b2ecb829df90657796b1f5f6a4893e27 WatchSource:0}: Error finding container 8f2caf12071bb82e5b52dddf36bcecc5b2ecb829df90657796b1f5f6a4893e27: Status 404 returned error can't find the container with id 8f2caf12071bb82e5b52dddf36bcecc5b2ecb829df90657796b1f5f6a4893e27 Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.860792 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:11 crc kubenswrapper[4900]: I1202 15:16:11.927666 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.009824 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.025168 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.107061 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5d4667dc-8gnfx"] Dec 02 15:16:12 crc kubenswrapper[4900]: W1202 15:16:12.116893 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc662ef3_57ef_4f0f_8a61_5a2bce3ba277.slice/crio-b6f3150e5dcffa99898bd94a00ca94a589707ba5ae8ea144e71411d248fd7ae3 WatchSource:0}: Error finding container b6f3150e5dcffa99898bd94a00ca94a589707ba5ae8ea144e71411d248fd7ae3: Status 404 returned error can't find the container with id b6f3150e5dcffa99898bd94a00ca94a589707ba5ae8ea144e71411d248fd7ae3 Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.325895 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jf5cj"] Dec 02 15:16:12 crc kubenswrapper[4900]: W1202 15:16:12.333811 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb90faf1_bd20_4634_93f6_977be90a5a93.slice/crio-5155e7f6675b43d4b880f3f38a95f9c2a7288db5da91b37957cb2c286abaaa5d WatchSource:0}: Error finding container 5155e7f6675b43d4b880f3f38a95f9c2a7288db5da91b37957cb2c286abaaa5d: Status 404 returned error can't find the container with id 5155e7f6675b43d4b880f3f38a95f9c2a7288db5da91b37957cb2c286abaaa5d Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.835747 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385bfba8-cd50-4143-9e34-1245fbfe7d9b","Type":"ContainerStarted","Data":"01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.835799 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385bfba8-cd50-4143-9e34-1245fbfe7d9b","Type":"ContainerStarted","Data":"a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.835812 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385bfba8-cd50-4143-9e34-1245fbfe7d9b","Type":"ContainerStarted","Data":"75de0147f1cff837a3d07fef56f04b9aa8aec8b937addd6880f9db6fcb11bb9e"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.839302 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"7e8928c5-5d77-4a39-a11f-55d02b1e9296","Type":"ContainerStarted","Data":"c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.839347 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e8928c5-5d77-4a39-a11f-55d02b1e9296","Type":"ContainerStarted","Data":"c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.841297 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a4a0858a-5a98-480b-a879-ecebcccd694f","Type":"ContainerStarted","Data":"ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.841343 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a4a0858a-5a98-480b-a879-ecebcccd694f","Type":"ContainerStarted","Data":"3730cbda06234c73cce03f5bf559aacb9ec07b11feb4dc9424890010c91a7a17"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.844331 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gf82k" event={"ID":"cebb257b-4ffb-40bd-a873-77699f11ee7f","Type":"ContainerStarted","Data":"75adc67660f25f19d773cae4817514e6acb04f8d1ff7bd4636698aed4ba23d3d"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.844372 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gf82k" event={"ID":"cebb257b-4ffb-40bd-a873-77699f11ee7f","Type":"ContainerStarted","Data":"8f2caf12071bb82e5b52dddf36bcecc5b2ecb829df90657796b1f5f6a4893e27"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.846321 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11e8048a-0901-478c-bf94-cc9fb894bdad","Type":"ContainerStarted","Data":"5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.846362 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11e8048a-0901-478c-bf94-cc9fb894bdad","Type":"ContainerStarted","Data":"9047e6f5a61266ad9bef85fda2d04d3c8f88748d7e8294f554f25b9d374d0de1"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.848320 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jf5cj" event={"ID":"fb90faf1-bd20-4634-93f6-977be90a5a93","Type":"ContainerStarted","Data":"d3dfb294061abdd68c573fddbd64b52ce0d022bfedc07cd5694677da4da351f4"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.848374 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jf5cj" event={"ID":"fb90faf1-bd20-4634-93f6-977be90a5a93","Type":"ContainerStarted","Data":"5155e7f6675b43d4b880f3f38a95f9c2a7288db5da91b37957cb2c286abaaa5d"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.871988 4900 generic.go:334] "Generic (PLEG): container finished" podID="bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" containerID="21ceda574dd748eaffa3b161eec4d4c5084f87e45eb3188fd2ed668b809e0b71" exitCode=0 Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.872041 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" event={"ID":"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277","Type":"ContainerDied","Data":"21ceda574dd748eaffa3b161eec4d4c5084f87e45eb3188fd2ed668b809e0b71"} Dec 02 
15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.872074 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" event={"ID":"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277","Type":"ContainerStarted","Data":"b6f3150e5dcffa99898bd94a00ca94a589707ba5ae8ea144e71411d248fd7ae3"} Dec 02 15:16:12 crc kubenswrapper[4900]: I1202 15:16:12.989803 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jf5cj" podStartSLOduration=1.989782301 podStartE2EDuration="1.989782301s" podCreationTimestamp="2025-12-02 15:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:12.967411543 +0000 UTC m=+5618.383225394" watchObservedRunningTime="2025-12-02 15:16:12.989782301 +0000 UTC m=+5618.405596152" Dec 02 15:16:13 crc kubenswrapper[4900]: I1202 15:16:13.082827 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.082805122 podStartE2EDuration="3.082805122s" podCreationTimestamp="2025-12-02 15:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:12.891993697 +0000 UTC m=+5618.307807548" watchObservedRunningTime="2025-12-02 15:16:13.082805122 +0000 UTC m=+5618.498618973" Dec 02 15:16:13 crc kubenswrapper[4900]: I1202 15:16:13.139414 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.13939526 podStartE2EDuration="3.13939526s" podCreationTimestamp="2025-12-02 15:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:13.031186323 +0000 UTC m=+5618.447000174" watchObservedRunningTime="2025-12-02 15:16:13.13939526 +0000 UTC m=+5618.555209111" Dec 02 15:16:13 crc kubenswrapper[4900]: I1202 15:16:13.146522 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.146498649 podStartE2EDuration="3.146498649s" podCreationTimestamp="2025-12-02 15:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:13.090072116 +0000 UTC m=+5618.505885967" watchObservedRunningTime="2025-12-02 15:16:13.146498649 +0000 UTC m=+5618.562312500" Dec 02 15:16:13 crc kubenswrapper[4900]: I1202 15:16:13.157470 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.157456007 podStartE2EDuration="3.157456007s" podCreationTimestamp="2025-12-02 15:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:13.148924997 +0000 UTC m=+5618.564738838" watchObservedRunningTime="2025-12-02 15:16:13.157456007 +0000 UTC m=+5618.573269858" Dec 02 15:16:13 crc kubenswrapper[4900]: I1202 15:16:13.167272 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-gf82k" podStartSLOduration=3.16718678 podStartE2EDuration="3.16718678s" podCreationTimestamp="2025-12-02 15:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 
15:16:13.16684005 +0000 UTC m=+5618.582653901" watchObservedRunningTime="2025-12-02 15:16:13.16718678 +0000 UTC m=+5618.583000631" Dec 02 15:16:13 crc kubenswrapper[4900]: I1202 15:16:13.884695 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" event={"ID":"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277","Type":"ContainerStarted","Data":"68b0017238675fcf2907e4d9f261e64c42689db03ae2e5eaa9c74b49fa64d531"} Dec 02 15:16:13 crc kubenswrapper[4900]: I1202 15:16:13.908909 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" podStartSLOduration=3.908862714 podStartE2EDuration="3.908862714s" podCreationTimestamp="2025-12-02 15:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:13.906005464 +0000 UTC m=+5619.321819325" watchObservedRunningTime="2025-12-02 15:16:13.908862714 +0000 UTC m=+5619.324676565" Dec 02 15:16:14 crc kubenswrapper[4900]: I1202 15:16:14.894700 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:15 crc kubenswrapper[4900]: I1202 15:16:15.908513 4900 generic.go:334] "Generic (PLEG): container finished" podID="fb90faf1-bd20-4634-93f6-977be90a5a93" containerID="d3dfb294061abdd68c573fddbd64b52ce0d022bfedc07cd5694677da4da351f4" exitCode=0 Dec 02 15:16:15 crc kubenswrapper[4900]: I1202 15:16:15.910177 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jf5cj" event={"ID":"fb90faf1-bd20-4634-93f6-977be90a5a93","Type":"ContainerDied","Data":"d3dfb294061abdd68c573fddbd64b52ce0d022bfedc07cd5694677da4da351f4"} Dec 02 15:16:16 crc kubenswrapper[4900]: I1202 15:16:16.433599 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 15:16:16 crc kubenswrapper[4900]: I1202 15:16:16.434994 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 15:16:16 crc kubenswrapper[4900]: I1202 15:16:16.445671 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 15:16:16 crc kubenswrapper[4900]: I1202 15:16:16.459020 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:16 crc kubenswrapper[4900]: I1202 15:16:16.921326 4900 generic.go:334] "Generic (PLEG): container finished" podID="cebb257b-4ffb-40bd-a873-77699f11ee7f" containerID="75adc67660f25f19d773cae4817514e6acb04f8d1ff7bd4636698aed4ba23d3d" exitCode=0 Dec 02 15:16:16 crc kubenswrapper[4900]: I1202 15:16:16.931128 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gf82k" event={"ID":"cebb257b-4ffb-40bd-a873-77699f11ee7f","Type":"ContainerDied","Data":"75adc67660f25f19d773cae4817514e6acb04f8d1ff7bd4636698aed4ba23d3d"} Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.401950 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.503702 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjfzm\" (UniqueName: \"kubernetes.io/projected/fb90faf1-bd20-4634-93f6-977be90a5a93-kube-api-access-jjfzm\") pod \"fb90faf1-bd20-4634-93f6-977be90a5a93\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.503762 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-combined-ca-bundle\") pod \"fb90faf1-bd20-4634-93f6-977be90a5a93\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.503864 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-scripts\") pod \"fb90faf1-bd20-4634-93f6-977be90a5a93\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.504876 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-config-data\") pod \"fb90faf1-bd20-4634-93f6-977be90a5a93\" (UID: \"fb90faf1-bd20-4634-93f6-977be90a5a93\") " Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.510814 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-scripts" (OuterVolumeSpecName: "scripts") pod "fb90faf1-bd20-4634-93f6-977be90a5a93" (UID: "fb90faf1-bd20-4634-93f6-977be90a5a93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.510859 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb90faf1-bd20-4634-93f6-977be90a5a93-kube-api-access-jjfzm" (OuterVolumeSpecName: "kube-api-access-jjfzm") pod "fb90faf1-bd20-4634-93f6-977be90a5a93" (UID: "fb90faf1-bd20-4634-93f6-977be90a5a93"). InnerVolumeSpecName "kube-api-access-jjfzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.560296 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb90faf1-bd20-4634-93f6-977be90a5a93" (UID: "fb90faf1-bd20-4634-93f6-977be90a5a93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.571765 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-config-data" (OuterVolumeSpecName: "config-data") pod "fb90faf1-bd20-4634-93f6-977be90a5a93" (UID: "fb90faf1-bd20-4634-93f6-977be90a5a93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.606695 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.606722 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjfzm\" (UniqueName: \"kubernetes.io/projected/fb90faf1-bd20-4634-93f6-977be90a5a93-kube-api-access-jjfzm\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.606734 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.606742 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb90faf1-bd20-4634-93f6-977be90a5a93-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.937039 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jf5cj" event={"ID":"fb90faf1-bd20-4634-93f6-977be90a5a93","Type":"ContainerDied","Data":"5155e7f6675b43d4b880f3f38a95f9c2a7288db5da91b37957cb2c286abaaa5d"} Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.937085 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jf5cj" Dec 02 15:16:17 crc kubenswrapper[4900]: I1202 15:16:17.937109 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5155e7f6675b43d4b880f3f38a95f9c2a7288db5da91b37957cb2c286abaaa5d" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.066893 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 15:16:18 crc kubenswrapper[4900]: E1202 15:16:18.068060 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb90faf1-bd20-4634-93f6-977be90a5a93" containerName="nova-cell1-conductor-db-sync" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.068081 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb90faf1-bd20-4634-93f6-977be90a5a93" containerName="nova-cell1-conductor-db-sync" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.068544 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb90faf1-bd20-4634-93f6-977be90a5a93" containerName="nova-cell1-conductor-db-sync" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.069836 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.094286 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.117878 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.117953 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmrb\" (UniqueName: \"kubernetes.io/projected/64ee25b4-c05d-4fa2-97fd-72687a003c57-kube-api-access-wgmrb\") pod \"nova-cell1-conductor-0\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.117984 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.119204 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.219590 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgmrb\" (UniqueName: \"kubernetes.io/projected/64ee25b4-c05d-4fa2-97fd-72687a003c57-kube-api-access-wgmrb\") pod \"nova-cell1-conductor-0\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.219681 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.219842 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.227504 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.251807 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.251888 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgmrb\" (UniqueName: \"kubernetes.io/projected/64ee25b4-c05d-4fa2-97fd-72687a003c57-kube-api-access-wgmrb\") pod \"nova-cell1-conductor-0\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.370128 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.415980 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.422823 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-scripts\") pod \"cebb257b-4ffb-40bd-a873-77699f11ee7f\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.423032 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-config-data\") pod \"cebb257b-4ffb-40bd-a873-77699f11ee7f\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.423083 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-combined-ca-bundle\") pod \"cebb257b-4ffb-40bd-a873-77699f11ee7f\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.423122 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gnbk\" (UniqueName: \"kubernetes.io/projected/cebb257b-4ffb-40bd-a873-77699f11ee7f-kube-api-access-5gnbk\") pod \"cebb257b-4ffb-40bd-a873-77699f11ee7f\" (UID: \"cebb257b-4ffb-40bd-a873-77699f11ee7f\") " Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.429312 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cebb257b-4ffb-40bd-a873-77699f11ee7f-kube-api-access-5gnbk" (OuterVolumeSpecName: "kube-api-access-5gnbk") pod "cebb257b-4ffb-40bd-a873-77699f11ee7f" (UID: "cebb257b-4ffb-40bd-a873-77699f11ee7f"). InnerVolumeSpecName "kube-api-access-5gnbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.430540 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-scripts" (OuterVolumeSpecName: "scripts") pod "cebb257b-4ffb-40bd-a873-77699f11ee7f" (UID: "cebb257b-4ffb-40bd-a873-77699f11ee7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.448853 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-config-data" (OuterVolumeSpecName: "config-data") pod "cebb257b-4ffb-40bd-a873-77699f11ee7f" (UID: "cebb257b-4ffb-40bd-a873-77699f11ee7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.473433 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cebb257b-4ffb-40bd-a873-77699f11ee7f" (UID: "cebb257b-4ffb-40bd-a873-77699f11ee7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.525803 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.525850 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.525871 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cebb257b-4ffb-40bd-a873-77699f11ee7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.525890 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gnbk\" (UniqueName: \"kubernetes.io/projected/cebb257b-4ffb-40bd-a873-77699f11ee7f-kube-api-access-5gnbk\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.883186 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.950442 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"64ee25b4-c05d-4fa2-97fd-72687a003c57","Type":"ContainerStarted","Data":"3779e4e1c65b0520d8171ef6f15760ebfb2a07011fc775fbc362856373d962bc"} Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.953088 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gf82k" event={"ID":"cebb257b-4ffb-40bd-a873-77699f11ee7f","Type":"ContainerDied","Data":"8f2caf12071bb82e5b52dddf36bcecc5b2ecb829df90657796b1f5f6a4893e27"} Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.953112 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f2caf12071bb82e5b52dddf36bcecc5b2ecb829df90657796b1f5f6a4893e27" Dec 02 15:16:18 crc kubenswrapper[4900]: I1202 15:16:18.953248 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gf82k" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.149957 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.150406 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e8928c5-5d77-4a39-a11f-55d02b1e9296" containerName="nova-api-log" containerID="cri-o://c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976" gracePeriod=30 Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.150537 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e8928c5-5d77-4a39-a11f-55d02b1e9296" containerName="nova-api-api" containerID="cri-o://c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34" gracePeriod=30 Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.164326 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.164554 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a4a0858a-5a98-480b-a879-ecebcccd694f" containerName="nova-scheduler-scheduler" containerID="cri-o://ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e" gracePeriod=30 Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.222912 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.223154 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="385bfba8-cd50-4143-9e34-1245fbfe7d9b" containerName="nova-metadata-log" containerID="cri-o://a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a" gracePeriod=30 Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.223796 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="385bfba8-cd50-4143-9e34-1245fbfe7d9b" containerName="nova-metadata-metadata" containerID="cri-o://01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12" gracePeriod=30 Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.697547 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.764733 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e8928c5-5d77-4a39-a11f-55d02b1e9296-logs\") pod \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.764823 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-config-data\") pod \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.764873 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-combined-ca-bundle\") pod \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.764930 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmqfw\" (UniqueName: \"kubernetes.io/projected/7e8928c5-5d77-4a39-a11f-55d02b1e9296-kube-api-access-pmqfw\") pod \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\" (UID: \"7e8928c5-5d77-4a39-a11f-55d02b1e9296\") " Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.765172 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e8928c5-5d77-4a39-a11f-55d02b1e9296-logs" (OuterVolumeSpecName: "logs") pod "7e8928c5-5d77-4a39-a11f-55d02b1e9296" (UID: "7e8928c5-5d77-4a39-a11f-55d02b1e9296"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.765497 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e8928c5-5d77-4a39-a11f-55d02b1e9296-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.772549 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e8928c5-5d77-4a39-a11f-55d02b1e9296-kube-api-access-pmqfw" (OuterVolumeSpecName: "kube-api-access-pmqfw") pod "7e8928c5-5d77-4a39-a11f-55d02b1e9296" (UID: "7e8928c5-5d77-4a39-a11f-55d02b1e9296"). InnerVolumeSpecName "kube-api-access-pmqfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.788721 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.806731 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e8928c5-5d77-4a39-a11f-55d02b1e9296" (UID: "7e8928c5-5d77-4a39-a11f-55d02b1e9296"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.807916 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-config-data" (OuterVolumeSpecName: "config-data") pod "7e8928c5-5d77-4a39-a11f-55d02b1e9296" (UID: "7e8928c5-5d77-4a39-a11f-55d02b1e9296"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.866403 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-combined-ca-bundle\") pod \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.866492 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-config-data\") pod \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.866585 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385bfba8-cd50-4143-9e34-1245fbfe7d9b-logs\") pod \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.866661 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk2nb\" (UniqueName: \"kubernetes.io/projected/385bfba8-cd50-4143-9e34-1245fbfe7d9b-kube-api-access-zk2nb\") pod \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\" (UID: \"385bfba8-cd50-4143-9e34-1245fbfe7d9b\") " Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.867470 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/385bfba8-cd50-4143-9e34-1245fbfe7d9b-logs" (OuterVolumeSpecName: "logs") pod "385bfba8-cd50-4143-9e34-1245fbfe7d9b" (UID: "385bfba8-cd50-4143-9e34-1245fbfe7d9b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.868049 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/385bfba8-cd50-4143-9e34-1245fbfe7d9b-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.868070 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.868079 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8928c5-5d77-4a39-a11f-55d02b1e9296-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.868089 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmqfw\" (UniqueName: \"kubernetes.io/projected/7e8928c5-5d77-4a39-a11f-55d02b1e9296-kube-api-access-pmqfw\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.873767 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/385bfba8-cd50-4143-9e34-1245fbfe7d9b-kube-api-access-zk2nb" (OuterVolumeSpecName: "kube-api-access-zk2nb") pod "385bfba8-cd50-4143-9e34-1245fbfe7d9b" (UID: "385bfba8-cd50-4143-9e34-1245fbfe7d9b"). InnerVolumeSpecName "kube-api-access-zk2nb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.895977 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-config-data" (OuterVolumeSpecName: "config-data") pod "385bfba8-cd50-4143-9e34-1245fbfe7d9b" (UID: "385bfba8-cd50-4143-9e34-1245fbfe7d9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.900567 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "385bfba8-cd50-4143-9e34-1245fbfe7d9b" (UID: "385bfba8-cd50-4143-9e34-1245fbfe7d9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.960454 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.961085 4900 generic.go:334] "Generic (PLEG): container finished" podID="385bfba8-cd50-4143-9e34-1245fbfe7d9b" containerID="01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12" exitCode=0 Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.961110 4900 generic.go:334] "Generic (PLEG): container finished" podID="385bfba8-cd50-4143-9e34-1245fbfe7d9b" containerID="a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a" exitCode=143 Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.961148 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385bfba8-cd50-4143-9e34-1245fbfe7d9b","Type":"ContainerDied","Data":"01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12"} Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.961173 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385bfba8-cd50-4143-9e34-1245fbfe7d9b","Type":"ContainerDied","Data":"a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a"} Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.961182 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"385bfba8-cd50-4143-9e34-1245fbfe7d9b","Type":"ContainerDied","Data":"75de0147f1cff837a3d07fef56f04b9aa8aec8b937addd6880f9db6fcb11bb9e"} Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.961197 4900 scope.go:117] "RemoveContainer" containerID="01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.961315 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.970045 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-combined-ca-bundle\") pod \"a4a0858a-5a98-480b-a879-ecebcccd694f\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.970132 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84bqh\" (UniqueName: \"kubernetes.io/projected/a4a0858a-5a98-480b-a879-ecebcccd694f-kube-api-access-84bqh\") pod \"a4a0858a-5a98-480b-a879-ecebcccd694f\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.970208 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-config-data\") pod \"a4a0858a-5a98-480b-a879-ecebcccd694f\" (UID: \"a4a0858a-5a98-480b-a879-ecebcccd694f\") " Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.970689 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk2nb\" (UniqueName: \"kubernetes.io/projected/385bfba8-cd50-4143-9e34-1245fbfe7d9b-kube-api-access-zk2nb\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.970735 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.970745 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385bfba8-cd50-4143-9e34-1245fbfe7d9b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.972925 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"64ee25b4-c05d-4fa2-97fd-72687a003c57","Type":"ContainerStarted","Data":"f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92"} Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.973052 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.974319 4900 generic.go:334] "Generic (PLEG): container finished" podID="7e8928c5-5d77-4a39-a11f-55d02b1e9296" containerID="c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34" exitCode=0 Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.974341 4900 generic.go:334] "Generic (PLEG): container finished" podID="7e8928c5-5d77-4a39-a11f-55d02b1e9296" containerID="c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976" exitCode=143 Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.976774 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.974370 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e8928c5-5d77-4a39-a11f-55d02b1e9296","Type":"ContainerDied","Data":"c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34"} Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.977240 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e8928c5-5d77-4a39-a11f-55d02b1e9296","Type":"ContainerDied","Data":"c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976"} Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.977258 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e8928c5-5d77-4a39-a11f-55d02b1e9296","Type":"ContainerDied","Data":"5a97a31bcc6864db74db3c0b49dc96f9013978daa65e7dc37962eb8562a2eb5d"} Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.983105 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.983665 4900 generic.go:334] "Generic (PLEG): container finished" podID="a4a0858a-5a98-480b-a879-ecebcccd694f" containerID="ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e" exitCode=0 Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.983721 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a4a0858a-5a98-480b-a879-ecebcccd694f","Type":"ContainerDied","Data":"ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e"} Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.983744 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a4a0858a-5a98-480b-a879-ecebcccd694f","Type":"ContainerDied","Data":"3730cbda06234c73cce03f5bf559aacb9ec07b11feb4dc9424890010c91a7a17"} Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.986503 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a0858a-5a98-480b-a879-ecebcccd694f-kube-api-access-84bqh" (OuterVolumeSpecName: "kube-api-access-84bqh") pod "a4a0858a-5a98-480b-a879-ecebcccd694f" (UID: "a4a0858a-5a98-480b-a879-ecebcccd694f"). InnerVolumeSpecName "kube-api-access-84bqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:19 crc kubenswrapper[4900]: I1202 15:16:19.996123 4900 scope.go:117] "RemoveContainer" containerID="a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.001553 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.001531659 podStartE2EDuration="2.001531659s" podCreationTimestamp="2025-12-02 15:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:20.000778728 +0000 UTC m=+5625.416592589" watchObservedRunningTime="2025-12-02 15:16:20.001531659 +0000 UTC m=+5625.417345520" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.023713 4900 scope.go:117] "RemoveContainer" containerID="01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12" Dec 02 15:16:20 crc kubenswrapper[4900]: E1202 15:16:20.024270 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12\": container with ID starting with 01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12 not found: ID does not exist" containerID="01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.024481 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12"} err="failed to get container status \"01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12\": rpc error: code = NotFound desc = could not find container \"01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12\": container with ID starting with 01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12 not found: ID does not exist" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.024502 4900 scope.go:117] "RemoveContainer" containerID="a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a" Dec 02 15:16:20 crc kubenswrapper[4900]: E1202 15:16:20.025079 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a\": container with ID starting with a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a not found: ID does not exist" containerID="a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.025257 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a"} err="failed to get container status \"a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a\": rpc error: code = NotFound desc = could not find container \"a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a\": container with ID starting with a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a not found: ID does not exist" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.025387 4900 scope.go:117] "RemoveContainer" containerID="01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.026009 4900 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12"} err="failed to get container status \"01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12\": rpc error: code = NotFound desc = could not find container \"01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12\": container with ID starting with 01a941e2c3365e5fae52a07e47fa37867b0b9c85c5c9aaeca36588e4ec430d12 not found: ID does not exist" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.026049 4900 scope.go:117] "RemoveContainer" containerID="a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.026404 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4a0858a-5a98-480b-a879-ecebcccd694f" (UID: "a4a0858a-5a98-480b-a879-ecebcccd694f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.026451 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a"} err="failed to get container status \"a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a\": rpc error: code = NotFound desc = could not find container \"a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a\": container with ID starting with a1060ae1c775a05030c5f171eb14bd4f4b03e356ef2fdb139e785cbc2c40f02a not found: ID does not exist" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.026468 4900 scope.go:117] "RemoveContainer" containerID="c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.027222 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.027367 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-config-data" (OuterVolumeSpecName: "config-data") pod "a4a0858a-5a98-480b-a879-ecebcccd694f" (UID: "a4a0858a-5a98-480b-a879-ecebcccd694f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.038281 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.052842 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.066478 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.073965 4900 scope.go:117] "RemoveContainer" containerID="c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.075338 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.075430 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a0858a-5a98-480b-a879-ecebcccd694f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.075503 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84bqh\" (UniqueName: \"kubernetes.io/projected/a4a0858a-5a98-480b-a879-ecebcccd694f-kube-api-access-84bqh\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.093751 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: E1202 15:16:20.094321 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8928c5-5d77-4a39-a11f-55d02b1e9296" containerName="nova-api-log" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094342 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8928c5-5d77-4a39-a11f-55d02b1e9296" containerName="nova-api-log" Dec 02 15:16:20 crc kubenswrapper[4900]: E1202 15:16:20.094363 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cebb257b-4ffb-40bd-a873-77699f11ee7f" containerName="nova-manage" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094371 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cebb257b-4ffb-40bd-a873-77699f11ee7f" containerName="nova-manage" Dec 02 15:16:20 crc kubenswrapper[4900]: E1202 15:16:20.094382 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385bfba8-cd50-4143-9e34-1245fbfe7d9b" containerName="nova-metadata-log" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094389 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="385bfba8-cd50-4143-9e34-1245fbfe7d9b" containerName="nova-metadata-log" Dec 02 15:16:20 crc kubenswrapper[4900]: E1202 15:16:20.094405 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385bfba8-cd50-4143-9e34-1245fbfe7d9b" containerName="nova-metadata-metadata" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094411 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="385bfba8-cd50-4143-9e34-1245fbfe7d9b" containerName="nova-metadata-metadata" Dec 02 15:16:20 crc kubenswrapper[4900]: E1202 15:16:20.094480 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8928c5-5d77-4a39-a11f-55d02b1e9296" containerName="nova-api-api" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094488 4900 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7e8928c5-5d77-4a39-a11f-55d02b1e9296" containerName="nova-api-api" Dec 02 15:16:20 crc kubenswrapper[4900]: E1202 15:16:20.094594 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a0858a-5a98-480b-a879-ecebcccd694f" containerName="nova-scheduler-scheduler" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094603 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a0858a-5a98-480b-a879-ecebcccd694f" containerName="nova-scheduler-scheduler" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094771 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e8928c5-5d77-4a39-a11f-55d02b1e9296" containerName="nova-api-log" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094783 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="385bfba8-cd50-4143-9e34-1245fbfe7d9b" containerName="nova-metadata-log" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094800 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="cebb257b-4ffb-40bd-a873-77699f11ee7f" containerName="nova-manage" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094814 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a0858a-5a98-480b-a879-ecebcccd694f" containerName="nova-scheduler-scheduler" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094821 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e8928c5-5d77-4a39-a11f-55d02b1e9296" containerName="nova-api-api" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.094828 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="385bfba8-cd50-4143-9e34-1245fbfe7d9b" containerName="nova-metadata-metadata" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.095753 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.105193 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.108488 4900 scope.go:117] "RemoveContainer" containerID="c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.109207 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 15:16:20 crc kubenswrapper[4900]: E1202 15:16:20.109694 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34\": container with ID starting with c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34 not found: ID does not exist" containerID="c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.109770 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34"} err="failed to get container status \"c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34\": rpc error: code = NotFound desc = could not find container \"c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34\": container with ID starting with c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34 not found: ID does not exist" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.109802 4900 scope.go:117] "RemoveContainer" containerID="c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976" Dec 02 15:16:20 crc kubenswrapper[4900]: E1202 15:16:20.112730 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976\": container with ID starting with c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976 not found: ID does not exist" containerID="c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.112781 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976"} err="failed to get container status \"c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976\": rpc error: code = NotFound desc = could not find container \"c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976\": container with ID starting with c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976 not found: ID does not exist" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.112810 4900 scope.go:117] "RemoveContainer" containerID="c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.113983 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34"} err="failed to get container status \"c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34\": rpc error: code = NotFound desc = could not find container \"c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34\": container with ID starting with 
c4ab24589c2b8260be6ce7666b540892b59ae28f2f22b4bd1fac11a008f16e34 not found: ID does not exist" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.114022 4900 scope.go:117] "RemoveContainer" containerID="c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.114283 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976"} err="failed to get container status \"c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976\": rpc error: code = NotFound desc = could not find container \"c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976\": container with ID starting with c198dba81f71959ab8d970913f2c4d61fc88a1cc83e11d988f2339d94dde7976 not found: ID does not exist" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.114301 4900 scope.go:117] "RemoveContainer" containerID="ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.117481 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.119055 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.123254 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.132723 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.150845 4900 scope.go:117] "RemoveContainer" containerID="ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e" Dec 02 15:16:20 crc kubenswrapper[4900]: E1202 15:16:20.151278 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e\": container with ID starting with ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e not found: ID does not exist" containerID="ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.151304 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e"} err="failed to get container status \"ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e\": rpc error: code = NotFound desc = could not find container \"ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e\": container with ID starting with ab4f1938ac96821397051ef9ffd60563bae3ed2aa40071db894a771c9989663e not found: ID does not exist" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.176841 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.176898 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2v4r\" (UniqueName: 
\"kubernetes.io/projected/65093d92-f274-4287-8750-997ddc8c0c44-kube-api-access-w2v4r\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.176945 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-logs\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.176962 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.176981 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-config-data\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.177179 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v76kw\" (UniqueName: \"kubernetes.io/projected/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-kube-api-access-v76kw\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.177629 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-config-data\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.177720 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65093d92-f274-4287-8750-997ddc8c0c44-logs\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.279310 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v76kw\" (UniqueName: \"kubernetes.io/projected/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-kube-api-access-v76kw\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.279698 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-config-data\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.279733 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65093d92-f274-4287-8750-997ddc8c0c44-logs\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.279774 4900 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.279836 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2v4r\" (UniqueName: \"kubernetes.io/projected/65093d92-f274-4287-8750-997ddc8c0c44-kube-api-access-w2v4r\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.279875 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-logs\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.279892 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.279905 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-config-data\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.281005 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-logs\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.281341 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65093d92-f274-4287-8750-997ddc8c0c44-logs\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.295578 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.295670 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-config-data\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.296385 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v76kw\" (UniqueName: \"kubernetes.io/projected/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-kube-api-access-v76kw\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.298288 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.299012 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-config-data\") pod \"nova-api-0\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.300131 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2v4r\" (UniqueName: \"kubernetes.io/projected/65093d92-f274-4287-8750-997ddc8c0c44-kube-api-access-w2v4r\") pod \"nova-metadata-0\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.409904 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.419481 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.430682 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.430752 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.432010 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.435458 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.443789 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.452536 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.484943 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-config-data\") pod \"nova-scheduler-0\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.485136 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.485727 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvlm6\" (UniqueName: \"kubernetes.io/projected/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-kube-api-access-gvlm6\") pod \"nova-scheduler-0\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.587960 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvlm6\" (UniqueName: \"kubernetes.io/projected/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-kube-api-access-gvlm6\") pod \"nova-scheduler-0\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.588027 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-config-data\") pod \"nova-scheduler-0\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.588100 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.594523 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-config-data\") pod \"nova-scheduler-0\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.594961 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.603846 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvlm6\" (UniqueName: \"kubernetes.io/projected/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-kube-api-access-gvlm6\") pod \"nova-scheduler-0\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " 
pod="openstack/nova-scheduler-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.857058 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.930152 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="385bfba8-cd50-4143-9e34-1245fbfe7d9b" path="/var/lib/kubelet/pods/385bfba8-cd50-4143-9e34-1245fbfe7d9b/volumes" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.931150 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e8928c5-5d77-4a39-a11f-55d02b1e9296" path="/var/lib/kubelet/pods/7e8928c5-5d77-4a39-a11f-55d02b1e9296/volumes" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.932124 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a0858a-5a98-480b-a879-ecebcccd694f" path="/var/lib/kubelet/pods/a4a0858a-5a98-480b-a879-ecebcccd694f/volumes" Dec 02 15:16:20 crc kubenswrapper[4900]: I1202 15:16:20.951668 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:20 crc kubenswrapper[4900]: W1202 15:16:20.954024 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce22131_047f_4f2f_b7df_cf3e9d6317d9.slice/crio-bd9e8681df1a9e7cea331540459a5c2889ccdf571e11728e6cafdce2065ec995 WatchSource:0}: Error finding container bd9e8681df1a9e7cea331540459a5c2889ccdf571e11728e6cafdce2065ec995: Status 404 returned error can't find the container with id bd9e8681df1a9e7cea331540459a5c2889ccdf571e11728e6cafdce2065ec995 Dec 02 15:16:21 crc kubenswrapper[4900]: I1202 15:16:21.016314 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ce22131-047f-4f2f-b7df-cf3e9d6317d9","Type":"ContainerStarted","Data":"bd9e8681df1a9e7cea331540459a5c2889ccdf571e11728e6cafdce2065ec995"} Dec 02 15:16:21 crc kubenswrapper[4900]: I1202 15:16:21.035249 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:21 crc kubenswrapper[4900]: W1202 15:16:21.050756 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65093d92_f274_4287_8750_997ddc8c0c44.slice/crio-71f350737f3a6936608099b64b04592c73e04a1119c87866d621b8c173a651a8 WatchSource:0}: Error finding container 71f350737f3a6936608099b64b04592c73e04a1119c87866d621b8c173a651a8: Status 404 returned error can't find the container with id 71f350737f3a6936608099b64b04592c73e04a1119c87866d621b8c173a651a8 Dec 02 15:16:21 crc kubenswrapper[4900]: I1202 15:16:21.458286 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:21 crc kubenswrapper[4900]: I1202 15:16:21.468001 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:21 crc kubenswrapper[4900]: I1202 15:16:21.474142 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:16:21 crc kubenswrapper[4900]: I1202 15:16:21.596364 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb88cb577-4k7jc"] Dec 02 15:16:21 crc kubenswrapper[4900]: I1202 15:16:21.599623 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" 
podUID="b898aeaf-d484-479f-bc33-3d692adddfeb" containerName="dnsmasq-dns" containerID="cri-o://5413cd290408b0f577d82ecf1fd06b4d03efbac3b009f50226dae79039809983" gracePeriod=10 Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.032972 4900 generic.go:334] "Generic (PLEG): container finished" podID="b898aeaf-d484-479f-bc33-3d692adddfeb" containerID="5413cd290408b0f577d82ecf1fd06b4d03efbac3b009f50226dae79039809983" exitCode=0 Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.033045 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" event={"ID":"b898aeaf-d484-479f-bc33-3d692adddfeb","Type":"ContainerDied","Data":"5413cd290408b0f577d82ecf1fd06b4d03efbac3b009f50226dae79039809983"} Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.033323 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" event={"ID":"b898aeaf-d484-479f-bc33-3d692adddfeb","Type":"ContainerDied","Data":"306f1f558eb3eec2e85710c562b58ee5d102aa8ee043f0785aff461f72c72710"} Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.033337 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="306f1f558eb3eec2e85710c562b58ee5d102aa8ee043f0785aff461f72c72710" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.035394 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ce22131-047f-4f2f-b7df-cf3e9d6317d9","Type":"ContainerStarted","Data":"5d1948550a671fea2b7c991d1ec4354a486c65958a4bbc6c1c73f2017355d946"} Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.035412 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ce22131-047f-4f2f-b7df-cf3e9d6317d9","Type":"ContainerStarted","Data":"1def316a32da1f7c02a8b1e5e09fcc27eb981020bb33b57fbe9967238404ecb1"} Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.038280 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65093d92-f274-4287-8750-997ddc8c0c44","Type":"ContainerStarted","Data":"92fc1eb619566df4a8e982e00dd38aa68a4f78c7c3c4943d6d0c7af1f41fa42c"} Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.038314 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65093d92-f274-4287-8750-997ddc8c0c44","Type":"ContainerStarted","Data":"f5f086bc8952a5e310db423ede7dfb6e5f3ac3364164dd0c624e8bdbee18736d"} Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.038324 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65093d92-f274-4287-8750-997ddc8c0c44","Type":"ContainerStarted","Data":"71f350737f3a6936608099b64b04592c73e04a1119c87866d621b8c173a651a8"} Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.047218 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.061888 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.061869381 podStartE2EDuration="2.061869381s" podCreationTimestamp="2025-12-02 15:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:22.055383079 +0000 UTC m=+5627.471196930" watchObservedRunningTime="2025-12-02 15:16:22.061869381 +0000 UTC m=+5627.477683232" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 
15:16:22.087208 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.087184371 podStartE2EDuration="2.087184371s" podCreationTimestamp="2025-12-02 15:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:22.080133133 +0000 UTC m=+5627.495946984" watchObservedRunningTime="2025-12-02 15:16:22.087184371 +0000 UTC m=+5627.502998222" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.098853 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.117916 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.233291 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkgm6\" (UniqueName: \"kubernetes.io/projected/b898aeaf-d484-479f-bc33-3d692adddfeb-kube-api-access-pkgm6\") pod \"b898aeaf-d484-479f-bc33-3d692adddfeb\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.233344 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-dns-svc\") pod \"b898aeaf-d484-479f-bc33-3d692adddfeb\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.233372 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-sb\") pod \"b898aeaf-d484-479f-bc33-3d692adddfeb\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.233471 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-nb\") pod \"b898aeaf-d484-479f-bc33-3d692adddfeb\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.233996 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-config\") pod \"b898aeaf-d484-479f-bc33-3d692adddfeb\" (UID: \"b898aeaf-d484-479f-bc33-3d692adddfeb\") " Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.237833 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b898aeaf-d484-479f-bc33-3d692adddfeb-kube-api-access-pkgm6" (OuterVolumeSpecName: "kube-api-access-pkgm6") pod "b898aeaf-d484-479f-bc33-3d692adddfeb" (UID: "b898aeaf-d484-479f-bc33-3d692adddfeb"). InnerVolumeSpecName "kube-api-access-pkgm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.289777 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b898aeaf-d484-479f-bc33-3d692adddfeb" (UID: "b898aeaf-d484-479f-bc33-3d692adddfeb"). InnerVolumeSpecName "ovsdbserver-nb". 
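The pod_startup_latency_tracker entries above make their arithmetic easy to verify: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp (the two pull timestamps stay at Go's zero time because no image pull was needed). For nova-metadata-0 that is 15:16:22.087184371 − 15:16:20 = 2.087184371 s, exactly the value logged. A one-line check, nothing kubelet-specific:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-12-02 15:16:20 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-12-02 15:16:22.087184371 +0000 UTC")
	fmt.Printf("%.9fs\n", observed.Sub(created).Seconds()) // 2.087184371s
}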
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.299521 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b898aeaf-d484-479f-bc33-3d692adddfeb" (UID: "b898aeaf-d484-479f-bc33-3d692adddfeb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.300460 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-config" (OuterVolumeSpecName: "config") pod "b898aeaf-d484-479f-bc33-3d692adddfeb" (UID: "b898aeaf-d484-479f-bc33-3d692adddfeb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.308655 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b898aeaf-d484-479f-bc33-3d692adddfeb" (UID: "b898aeaf-d484-479f-bc33-3d692adddfeb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.335237 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkgm6\" (UniqueName: \"kubernetes.io/projected/b898aeaf-d484-479f-bc33-3d692adddfeb-kube-api-access-pkgm6\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.335266 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.335277 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.335285 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:22 crc kubenswrapper[4900]: I1202 15:16:22.335295 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b898aeaf-d484-479f-bc33-3d692adddfeb-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:23 crc kubenswrapper[4900]: I1202 15:16:23.071357 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0be98e5c-8c57-4d7e-b65e-16a67ec79b95","Type":"ContainerStarted","Data":"3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c"} Dec 02 15:16:23 crc kubenswrapper[4900]: I1202 15:16:23.071409 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0be98e5c-8c57-4d7e-b65e-16a67ec79b95","Type":"ContainerStarted","Data":"15295e575c9cd19dd52a8fa0d1edcdfa48285d8b533ef1561337533153cdc731"} Dec 02 15:16:23 crc kubenswrapper[4900]: I1202 15:16:23.071484 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb88cb577-4k7jc" Dec 02 15:16:23 crc kubenswrapper[4900]: I1202 15:16:23.096551 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb88cb577-4k7jc"] Dec 02 15:16:23 crc kubenswrapper[4900]: I1202 15:16:23.108345 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb88cb577-4k7jc"] Dec 02 15:16:23 crc kubenswrapper[4900]: I1202 15:16:23.109092 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.10907387 podStartE2EDuration="3.10907387s" podCreationTimestamp="2025-12-02 15:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:23.108423431 +0000 UTC m=+5628.524237292" watchObservedRunningTime="2025-12-02 15:16:23.10907387 +0000 UTC m=+5628.524887721" Dec 02 15:16:24 crc kubenswrapper[4900]: I1202 15:16:24.927514 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b898aeaf-d484-479f-bc33-3d692adddfeb" path="/var/lib/kubelet/pods/b898aeaf-d484-479f-bc33-3d692adddfeb/volumes" Dec 02 15:16:25 crc kubenswrapper[4900]: I1202 15:16:25.444799 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 15:16:25 crc kubenswrapper[4900]: I1202 15:16:25.444868 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 15:16:25 crc kubenswrapper[4900]: I1202 15:16:25.858804 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 15:16:28 crc kubenswrapper[4900]: I1202 15:16:28.442576 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 02 15:16:28 crc kubenswrapper[4900]: I1202 15:16:28.924705 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dbh6d"] Dec 02 15:16:28 crc kubenswrapper[4900]: E1202 15:16:28.925250 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b898aeaf-d484-479f-bc33-3d692adddfeb" containerName="dnsmasq-dns" Dec 02 15:16:28 crc kubenswrapper[4900]: I1202 15:16:28.925317 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b898aeaf-d484-479f-bc33-3d692adddfeb" containerName="dnsmasq-dns" Dec 02 15:16:28 crc kubenswrapper[4900]: E1202 15:16:28.925389 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b898aeaf-d484-479f-bc33-3d692adddfeb" containerName="init" Dec 02 15:16:28 crc kubenswrapper[4900]: I1202 15:16:28.925464 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b898aeaf-d484-479f-bc33-3d692adddfeb" containerName="init" Dec 02 15:16:28 crc kubenswrapper[4900]: I1202 15:16:28.925729 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b898aeaf-d484-479f-bc33-3d692adddfeb" containerName="dnsmasq-dns" Dec 02 15:16:28 crc kubenswrapper[4900]: I1202 15:16:28.926396 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:28 crc kubenswrapper[4900]: I1202 15:16:28.930145 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 02 15:16:28 crc kubenswrapper[4900]: I1202 15:16:28.930270 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 02 15:16:28 crc kubenswrapper[4900]: I1202 15:16:28.939588 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dbh6d"] Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.080301 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-config-data\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.080670 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt9jz\" (UniqueName: \"kubernetes.io/projected/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-kube-api-access-gt9jz\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.080725 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-scripts\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.080791 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.182692 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt9jz\" (UniqueName: \"kubernetes.io/projected/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-kube-api-access-gt9jz\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.183041 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-scripts\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.183170 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.183282 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-config-data\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.189732 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-scripts\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.190528 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-config-data\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.190903 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.207121 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt9jz\" (UniqueName: \"kubernetes.io/projected/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-kube-api-access-gt9jz\") pod \"nova-cell1-cell-mapping-dbh6d\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.261434 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:29 crc kubenswrapper[4900]: I1202 15:16:29.790022 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dbh6d"] Dec 02 15:16:30 crc kubenswrapper[4900]: I1202 15:16:30.168828 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dbh6d" event={"ID":"ee15c717-27cd-48d5-b6d4-6be6043dbb1c","Type":"ContainerStarted","Data":"9a32dbc22f87b698cf0d79d9c763e83142f6f8c7381a3f95e9b9ce88a1e04db3"} Dec 02 15:16:30 crc kubenswrapper[4900]: I1202 15:16:30.169448 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dbh6d" event={"ID":"ee15c717-27cd-48d5-b6d4-6be6043dbb1c","Type":"ContainerStarted","Data":"23026b7c16200e86fe22a5849f237cc159c92d2ecb9f922c0ddc2100de7ad54a"} Dec 02 15:16:30 crc kubenswrapper[4900]: I1202 15:16:30.198714 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dbh6d" podStartSLOduration=2.198688651 podStartE2EDuration="2.198688651s" podCreationTimestamp="2025-12-02 15:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:30.183387982 +0000 UTC m=+5635.599201833" watchObservedRunningTime="2025-12-02 15:16:30.198688651 +0000 UTC m=+5635.614502542" Dec 02 15:16:30 crc kubenswrapper[4900]: I1202 15:16:30.431368 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 15:16:30 crc kubenswrapper[4900]: I1202 15:16:30.431446 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 15:16:30 crc kubenswrapper[4900]: I1202 15:16:30.445197 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 15:16:30 crc kubenswrapper[4900]: I1202 15:16:30.445271 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 15:16:30 crc kubenswrapper[4900]: I1202 15:16:30.858638 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 15:16:30 crc kubenswrapper[4900]: I1202 15:16:30.939294 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 15:16:31 crc kubenswrapper[4900]: I1202 15:16:31.241264 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 15:16:31 crc kubenswrapper[4900]: I1202 15:16:31.513883 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.66:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 15:16:31 crc kubenswrapper[4900]: I1202 15:16:31.595910 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="65093d92-f274-4287-8750-997ddc8c0c44" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.67:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 15:16:31 crc kubenswrapper[4900]: I1202 15:16:31.596012 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" 
containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.66:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 15:16:31 crc kubenswrapper[4900]: I1202 15:16:31.596117 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="65093d92-f274-4287-8750-997ddc8c0c44" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.67:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 15:16:35 crc kubenswrapper[4900]: I1202 15:16:35.222157 4900 generic.go:334] "Generic (PLEG): container finished" podID="ee15c717-27cd-48d5-b6d4-6be6043dbb1c" containerID="9a32dbc22f87b698cf0d79d9c763e83142f6f8c7381a3f95e9b9ce88a1e04db3" exitCode=0 Dec 02 15:16:35 crc kubenswrapper[4900]: I1202 15:16:35.222267 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dbh6d" event={"ID":"ee15c717-27cd-48d5-b6d4-6be6043dbb1c","Type":"ContainerDied","Data":"9a32dbc22f87b698cf0d79d9c763e83142f6f8c7381a3f95e9b9ce88a1e04db3"} Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.658360 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.722059 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt9jz\" (UniqueName: \"kubernetes.io/projected/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-kube-api-access-gt9jz\") pod \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.722271 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-combined-ca-bundle\") pod \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.722326 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-config-data\") pod \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.722403 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-scripts\") pod \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\" (UID: \"ee15c717-27cd-48d5-b6d4-6be6043dbb1c\") " Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.729450 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-scripts" (OuterVolumeSpecName: "scripts") pod "ee15c717-27cd-48d5-b6d4-6be6043dbb1c" (UID: "ee15c717-27cd-48d5-b6d4-6be6043dbb1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.731034 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-kube-api-access-gt9jz" (OuterVolumeSpecName: "kube-api-access-gt9jz") pod "ee15c717-27cd-48d5-b6d4-6be6043dbb1c" (UID: "ee15c717-27cd-48d5-b6d4-6be6043dbb1c"). InnerVolumeSpecName "kube-api-access-gt9jz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.757512 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-config-data" (OuterVolumeSpecName: "config-data") pod "ee15c717-27cd-48d5-b6d4-6be6043dbb1c" (UID: "ee15c717-27cd-48d5-b6d4-6be6043dbb1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.758931 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee15c717-27cd-48d5-b6d4-6be6043dbb1c" (UID: "ee15c717-27cd-48d5-b6d4-6be6043dbb1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.825475 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt9jz\" (UniqueName: \"kubernetes.io/projected/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-kube-api-access-gt9jz\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.825539 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.825555 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:36 crc kubenswrapper[4900]: I1202 15:16:36.825569 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee15c717-27cd-48d5-b6d4-6be6043dbb1c-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:37 crc kubenswrapper[4900]: I1202 15:16:37.251512 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dbh6d" event={"ID":"ee15c717-27cd-48d5-b6d4-6be6043dbb1c","Type":"ContainerDied","Data":"23026b7c16200e86fe22a5849f237cc159c92d2ecb9f922c0ddc2100de7ad54a"} Dec 02 15:16:37 crc kubenswrapper[4900]: I1202 15:16:37.251565 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23026b7c16200e86fe22a5849f237cc159c92d2ecb9f922c0ddc2100de7ad54a" Dec 02 15:16:37 crc kubenswrapper[4900]: I1202 15:16:37.251629 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dbh6d" Dec 02 15:16:37 crc kubenswrapper[4900]: I1202 15:16:37.464917 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:37 crc kubenswrapper[4900]: I1202 15:16:37.465137 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" containerName="nova-api-log" containerID="cri-o://1def316a32da1f7c02a8b1e5e09fcc27eb981020bb33b57fbe9967238404ecb1" gracePeriod=30 Dec 02 15:16:37 crc kubenswrapper[4900]: I1202 15:16:37.465223 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" containerName="nova-api-api" containerID="cri-o://5d1948550a671fea2b7c991d1ec4354a486c65958a4bbc6c1c73f2017355d946" gracePeriod=30 Dec 02 15:16:37 crc kubenswrapper[4900]: I1202 15:16:37.498630 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:37 crc kubenswrapper[4900]: I1202 15:16:37.505821 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0be98e5c-8c57-4d7e-b65e-16a67ec79b95" containerName="nova-scheduler-scheduler" containerID="cri-o://3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c" gracePeriod=30 Dec 02 15:16:37 crc kubenswrapper[4900]: I1202 15:16:37.515689 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:37 crc kubenswrapper[4900]: I1202 15:16:37.516023 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="65093d92-f274-4287-8750-997ddc8c0c44" containerName="nova-metadata-log" containerID="cri-o://f5f086bc8952a5e310db423ede7dfb6e5f3ac3364164dd0c624e8bdbee18736d" gracePeriod=30 Dec 02 15:16:37 crc kubenswrapper[4900]: I1202 15:16:37.516224 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="65093d92-f274-4287-8750-997ddc8c0c44" containerName="nova-metadata-metadata" containerID="cri-o://92fc1eb619566df4a8e982e00dd38aa68a4f78c7c3c4943d6d0c7af1f41fa42c" gracePeriod=30 Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.260477 4900 generic.go:334] "Generic (PLEG): container finished" podID="65093d92-f274-4287-8750-997ddc8c0c44" containerID="f5f086bc8952a5e310db423ede7dfb6e5f3ac3364164dd0c624e8bdbee18736d" exitCode=143 Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.260567 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65093d92-f274-4287-8750-997ddc8c0c44","Type":"ContainerDied","Data":"f5f086bc8952a5e310db423ede7dfb6e5f3ac3364164dd0c624e8bdbee18736d"} Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.262528 4900 generic.go:334] "Generic (PLEG): container finished" podID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" containerID="1def316a32da1f7c02a8b1e5e09fcc27eb981020bb33b57fbe9967238404ecb1" exitCode=143 Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.262559 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ce22131-047f-4f2f-b7df-cf3e9d6317d9","Type":"ContainerDied","Data":"1def316a32da1f7c02a8b1e5e09fcc27eb981020bb33b57fbe9967238404ecb1"} Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.804462 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.863627 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-config-data\") pod \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.863700 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-combined-ca-bundle\") pod \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.863751 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvlm6\" (UniqueName: \"kubernetes.io/projected/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-kube-api-access-gvlm6\") pod \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\" (UID: \"0be98e5c-8c57-4d7e-b65e-16a67ec79b95\") " Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.881919 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-kube-api-access-gvlm6" (OuterVolumeSpecName: "kube-api-access-gvlm6") pod "0be98e5c-8c57-4d7e-b65e-16a67ec79b95" (UID: "0be98e5c-8c57-4d7e-b65e-16a67ec79b95"). InnerVolumeSpecName "kube-api-access-gvlm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.890138 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-config-data" (OuterVolumeSpecName: "config-data") pod "0be98e5c-8c57-4d7e-b65e-16a67ec79b95" (UID: "0be98e5c-8c57-4d7e-b65e-16a67ec79b95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.898366 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0be98e5c-8c57-4d7e-b65e-16a67ec79b95" (UID: "0be98e5c-8c57-4d7e-b65e-16a67ec79b95"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.965281 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.965309 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:38 crc kubenswrapper[4900]: I1202 15:16:38.965319 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvlm6\" (UniqueName: \"kubernetes.io/projected/0be98e5c-8c57-4d7e-b65e-16a67ec79b95-kube-api-access-gvlm6\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.276334 4900 generic.go:334] "Generic (PLEG): container finished" podID="0be98e5c-8c57-4d7e-b65e-16a67ec79b95" containerID="3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c" exitCode=0 Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.276407 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0be98e5c-8c57-4d7e-b65e-16a67ec79b95","Type":"ContainerDied","Data":"3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c"} Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.276477 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0be98e5c-8c57-4d7e-b65e-16a67ec79b95","Type":"ContainerDied","Data":"15295e575c9cd19dd52a8fa0d1edcdfa48285d8b533ef1561337533153cdc731"} Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.276514 4900 scope.go:117] "RemoveContainer" containerID="3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.278342 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.306959 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.312985 4900 scope.go:117] "RemoveContainer" containerID="3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c" Dec 02 15:16:39 crc kubenswrapper[4900]: E1202 15:16:39.313576 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c\": container with ID starting with 3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c not found: ID does not exist" containerID="3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.313624 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c"} err="failed to get container status \"3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c\": rpc error: code = NotFound desc = could not find container \"3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c\": container with ID starting with 3020af19d04361693147dd90beec2ab8be23266c1253cbf49b568d894040846c not found: ID does not exist" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.324358 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.341046 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:39 crc kubenswrapper[4900]: E1202 15:16:39.341719 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be98e5c-8c57-4d7e-b65e-16a67ec79b95" containerName="nova-scheduler-scheduler" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.341756 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be98e5c-8c57-4d7e-b65e-16a67ec79b95" containerName="nova-scheduler-scheduler" Dec 02 15:16:39 crc kubenswrapper[4900]: E1202 15:16:39.341787 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee15c717-27cd-48d5-b6d4-6be6043dbb1c" containerName="nova-manage" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.341805 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee15c717-27cd-48d5-b6d4-6be6043dbb1c" containerName="nova-manage" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.342170 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee15c717-27cd-48d5-b6d4-6be6043dbb1c" containerName="nova-manage" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.342225 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be98e5c-8c57-4d7e-b65e-16a67ec79b95" containerName="nova-scheduler-scheduler" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.352605 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.352780 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.355595 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.371881 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.371983 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-config-data\") pod \"nova-scheduler-0\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.372086 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx869\" (UniqueName: \"kubernetes.io/projected/402900ed-44f4-42fe-b2ff-1fb701a09cf2-kube-api-access-nx869\") pod \"nova-scheduler-0\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.473308 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.474354 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-config-data\") pod \"nova-scheduler-0\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.474492 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx869\" (UniqueName: \"kubernetes.io/projected/402900ed-44f4-42fe-b2ff-1fb701a09cf2-kube-api-access-nx869\") pod \"nova-scheduler-0\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.479192 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.479519 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-config-data\") pod \"nova-scheduler-0\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") " pod="openstack/nova-scheduler-0" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.496849 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx869\" (UniqueName: \"kubernetes.io/projected/402900ed-44f4-42fe-b2ff-1fb701a09cf2-kube-api-access-nx869\") pod \"nova-scheduler-0\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") " 
pod="openstack/nova-scheduler-0" Dec 02 15:16:39 crc kubenswrapper[4900]: I1202 15:16:39.673517 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 15:16:40 crc kubenswrapper[4900]: I1202 15:16:40.195227 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:16:40 crc kubenswrapper[4900]: W1202 15:16:40.198081 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod402900ed_44f4_42fe_b2ff_1fb701a09cf2.slice/crio-99a94f21f9b45194ce04a23e8f556b55d8f4b5debcbdbb82374833b671a5b183 WatchSource:0}: Error finding container 99a94f21f9b45194ce04a23e8f556b55d8f4b5debcbdbb82374833b671a5b183: Status 404 returned error can't find the container with id 99a94f21f9b45194ce04a23e8f556b55d8f4b5debcbdbb82374833b671a5b183 Dec 02 15:16:40 crc kubenswrapper[4900]: I1202 15:16:40.293223 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"402900ed-44f4-42fe-b2ff-1fb701a09cf2","Type":"ContainerStarted","Data":"99a94f21f9b45194ce04a23e8f556b55d8f4b5debcbdbb82374833b671a5b183"} Dec 02 15:16:40 crc kubenswrapper[4900]: I1202 15:16:40.926206 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be98e5c-8c57-4d7e-b65e-16a67ec79b95" path="/var/lib/kubelet/pods/0be98e5c-8c57-4d7e-b65e-16a67ec79b95/volumes" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.311149 4900 generic.go:334] "Generic (PLEG): container finished" podID="65093d92-f274-4287-8750-997ddc8c0c44" containerID="92fc1eb619566df4a8e982e00dd38aa68a4f78c7c3c4943d6d0c7af1f41fa42c" exitCode=0 Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.311216 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65093d92-f274-4287-8750-997ddc8c0c44","Type":"ContainerDied","Data":"92fc1eb619566df4a8e982e00dd38aa68a4f78c7c3c4943d6d0c7af1f41fa42c"} Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.314858 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"402900ed-44f4-42fe-b2ff-1fb701a09cf2","Type":"ContainerStarted","Data":"da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047"} Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.317245 4900 generic.go:334] "Generic (PLEG): container finished" podID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" containerID="5d1948550a671fea2b7c991d1ec4354a486c65958a4bbc6c1c73f2017355d946" exitCode=0 Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.317281 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ce22131-047f-4f2f-b7df-cf3e9d6317d9","Type":"ContainerDied","Data":"5d1948550a671fea2b7c991d1ec4354a486c65958a4bbc6c1c73f2017355d946"} Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.346090 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.34607157 podStartE2EDuration="2.34607157s" podCreationTimestamp="2025-12-02 15:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:41.337459279 +0000 UTC m=+5646.753273160" watchObservedRunningTime="2025-12-02 15:16:41.34607157 +0000 UTC m=+5646.761885421" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.577570 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.582749 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.643230 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-combined-ca-bundle\") pod \"65093d92-f274-4287-8750-997ddc8c0c44\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.643329 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-config-data\") pod \"65093d92-f274-4287-8750-997ddc8c0c44\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.643369 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v76kw\" (UniqueName: \"kubernetes.io/projected/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-kube-api-access-v76kw\") pod \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.643408 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-combined-ca-bundle\") pod \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.643433 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65093d92-f274-4287-8750-997ddc8c0c44-logs\") pod \"65093d92-f274-4287-8750-997ddc8c0c44\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.643565 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-logs\") pod \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.643610 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2v4r\" (UniqueName: \"kubernetes.io/projected/65093d92-f274-4287-8750-997ddc8c0c44-kube-api-access-w2v4r\") pod \"65093d92-f274-4287-8750-997ddc8c0c44\" (UID: \"65093d92-f274-4287-8750-997ddc8c0c44\") " Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.643715 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-config-data\") pod \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\" (UID: \"2ce22131-047f-4f2f-b7df-cf3e9d6317d9\") " Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.645758 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65093d92-f274-4287-8750-997ddc8c0c44-logs" (OuterVolumeSpecName: "logs") pod "65093d92-f274-4287-8750-997ddc8c0c44" (UID: "65093d92-f274-4287-8750-997ddc8c0c44"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.646273 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-logs" (OuterVolumeSpecName: "logs") pod "2ce22131-047f-4f2f-b7df-cf3e9d6317d9" (UID: "2ce22131-047f-4f2f-b7df-cf3e9d6317d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.648377 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65093d92-f274-4287-8750-997ddc8c0c44-kube-api-access-w2v4r" (OuterVolumeSpecName: "kube-api-access-w2v4r") pod "65093d92-f274-4287-8750-997ddc8c0c44" (UID: "65093d92-f274-4287-8750-997ddc8c0c44"). InnerVolumeSpecName "kube-api-access-w2v4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.657247 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-kube-api-access-v76kw" (OuterVolumeSpecName: "kube-api-access-v76kw") pod "2ce22131-047f-4f2f-b7df-cf3e9d6317d9" (UID: "2ce22131-047f-4f2f-b7df-cf3e9d6317d9"). InnerVolumeSpecName "kube-api-access-v76kw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.681589 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-config-data" (OuterVolumeSpecName: "config-data") pod "65093d92-f274-4287-8750-997ddc8c0c44" (UID: "65093d92-f274-4287-8750-997ddc8c0c44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.681810 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ce22131-047f-4f2f-b7df-cf3e9d6317d9" (UID: "2ce22131-047f-4f2f-b7df-cf3e9d6317d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.688573 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65093d92-f274-4287-8750-997ddc8c0c44" (UID: "65093d92-f274-4287-8750-997ddc8c0c44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.691990 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-config-data" (OuterVolumeSpecName: "config-data") pod "2ce22131-047f-4f2f-b7df-cf3e9d6317d9" (UID: "2ce22131-047f-4f2f-b7df-cf3e9d6317d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.745586 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.745619 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65093d92-f274-4287-8750-997ddc8c0c44-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.745632 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v76kw\" (UniqueName: \"kubernetes.io/projected/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-kube-api-access-v76kw\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.745662 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.745670 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65093d92-f274-4287-8750-997ddc8c0c44-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.745679 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.745688 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2v4r\" (UniqueName: \"kubernetes.io/projected/65093d92-f274-4287-8750-997ddc8c0c44-kube-api-access-w2v4r\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:41 crc kubenswrapper[4900]: I1202 15:16:41.745696 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce22131-047f-4f2f-b7df-cf3e9d6317d9-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.330434 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.330429 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2ce22131-047f-4f2f-b7df-cf3e9d6317d9","Type":"ContainerDied","Data":"bd9e8681df1a9e7cea331540459a5c2889ccdf571e11728e6cafdce2065ec995"} Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.331786 4900 scope.go:117] "RemoveContainer" containerID="5d1948550a671fea2b7c991d1ec4354a486c65958a4bbc6c1c73f2017355d946" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.335542 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65093d92-f274-4287-8750-997ddc8c0c44","Type":"ContainerDied","Data":"71f350737f3a6936608099b64b04592c73e04a1119c87866d621b8c173a651a8"} Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.335575 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.361730 4900 scope.go:117] "RemoveContainer" containerID="1def316a32da1f7c02a8b1e5e09fcc27eb981020bb33b57fbe9967238404ecb1" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.380518 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.393434 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.399094 4900 scope.go:117] "RemoveContainer" containerID="92fc1eb619566df4a8e982e00dd38aa68a4f78c7c3c4943d6d0c7af1f41fa42c" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.417706 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.434416 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.436376 4900 scope.go:117] "RemoveContainer" containerID="f5f086bc8952a5e310db423ede7dfb6e5f3ac3364164dd0c624e8bdbee18736d" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.488690 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:42 crc kubenswrapper[4900]: E1202 15:16:42.489720 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" containerName="nova-api-log" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.489752 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" containerName="nova-api-log" Dec 02 15:16:42 crc kubenswrapper[4900]: E1202 15:16:42.489770 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65093d92-f274-4287-8750-997ddc8c0c44" containerName="nova-metadata-log" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.489779 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="65093d92-f274-4287-8750-997ddc8c0c44" containerName="nova-metadata-log" Dec 02 15:16:42 crc kubenswrapper[4900]: E1202 15:16:42.489807 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" containerName="nova-api-api" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.489816 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" containerName="nova-api-api" Dec 02 15:16:42 crc kubenswrapper[4900]: E1202 15:16:42.489849 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65093d92-f274-4287-8750-997ddc8c0c44" containerName="nova-metadata-metadata" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.489858 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="65093d92-f274-4287-8750-997ddc8c0c44" containerName="nova-metadata-metadata" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.490342 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="65093d92-f274-4287-8750-997ddc8c0c44" containerName="nova-metadata-log" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.490369 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" containerName="nova-api-api" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.490390 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" containerName="nova-api-log" Dec 02 15:16:42 crc 
kubenswrapper[4900]: I1202 15:16:42.490410 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="65093d92-f274-4287-8750-997ddc8c0c44" containerName="nova-metadata-metadata" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.495248 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.497995 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.502086 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.505150 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.507456 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.510157 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.519322 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.572301 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-config-data\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.572350 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-config-data\") pod \"nova-api-0\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.572373 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-logs\") pod \"nova-api-0\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.572463 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpnwj\" (UniqueName: \"kubernetes.io/projected/94e1f486-c3e5-420a-b8af-de18cb2b73b2-kube-api-access-hpnwj\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.572487 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.572504 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94e1f486-c3e5-420a-b8af-de18cb2b73b2-logs\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc 
kubenswrapper[4900]: I1202 15:16:42.572534 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46nj7\" (UniqueName: \"kubernetes.io/projected/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-kube-api-access-46nj7\") pod \"nova-api-0\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.572619 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.675086 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.675178 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-config-data\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.675226 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-config-data\") pod \"nova-api-0\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.675246 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-logs\") pod \"nova-api-0\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.675299 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpnwj\" (UniqueName: \"kubernetes.io/projected/94e1f486-c3e5-420a-b8af-de18cb2b73b2-kube-api-access-hpnwj\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.675320 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.675337 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94e1f486-c3e5-420a-b8af-de18cb2b73b2-logs\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.675376 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46nj7\" (UniqueName: \"kubernetes.io/projected/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-kube-api-access-46nj7\") pod \"nova-api-0\" (UID: 
\"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.676139 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-logs\") pod \"nova-api-0\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.676194 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94e1f486-c3e5-420a-b8af-de18cb2b73b2-logs\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.678926 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-config-data\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.679598 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.697291 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-config-data\") pod \"nova-api-0\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.697586 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.699244 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpnwj\" (UniqueName: \"kubernetes.io/projected/94e1f486-c3e5-420a-b8af-de18cb2b73b2-kube-api-access-hpnwj\") pod \"nova-metadata-0\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") " pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.701125 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46nj7\" (UniqueName: \"kubernetes.io/projected/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-kube-api-access-46nj7\") pod \"nova-api-0\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") " pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.821190 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.831279 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.927476 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce22131-047f-4f2f-b7df-cf3e9d6317d9" path="/var/lib/kubelet/pods/2ce22131-047f-4f2f-b7df-cf3e9d6317d9/volumes" Dec 02 15:16:42 crc kubenswrapper[4900]: I1202 15:16:42.929418 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65093d92-f274-4287-8750-997ddc8c0c44" path="/var/lib/kubelet/pods/65093d92-f274-4287-8750-997ddc8c0c44/volumes" Dec 02 15:16:43 crc kubenswrapper[4900]: I1202 15:16:43.099047 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:16:43 crc kubenswrapper[4900]: I1202 15:16:43.349194 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"94e1f486-c3e5-420a-b8af-de18cb2b73b2","Type":"ContainerStarted","Data":"d65ba649dd817633b3a7fa3059bf6940eb40dce9e7b318ec4d2cef1b9b4f9707"} Dec 02 15:16:43 crc kubenswrapper[4900]: I1202 15:16:43.349240 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"94e1f486-c3e5-420a-b8af-de18cb2b73b2","Type":"ContainerStarted","Data":"114b880da7695babf7bef4089309d793549c3f6ebad12af50e74cd49d69bcc86"} Dec 02 15:16:43 crc kubenswrapper[4900]: I1202 15:16:43.371289 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:16:43 crc kubenswrapper[4900]: W1202 15:16:43.381298 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a2c9ef_380a_4ad0_8756_95fe3df13d3d.slice/crio-52c923a6d7824423e6043d7bc2310d3ca5761a8c29db8b4cc58cd853b29067e1 WatchSource:0}: Error finding container 52c923a6d7824423e6043d7bc2310d3ca5761a8c29db8b4cc58cd853b29067e1: Status 404 returned error can't find the container with id 52c923a6d7824423e6043d7bc2310d3ca5761a8c29db8b4cc58cd853b29067e1 Dec 02 15:16:44 crc kubenswrapper[4900]: I1202 15:16:44.365534 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24a2c9ef-380a-4ad0-8756-95fe3df13d3d","Type":"ContainerStarted","Data":"425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98"} Dec 02 15:16:44 crc kubenswrapper[4900]: I1202 15:16:44.366076 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24a2c9ef-380a-4ad0-8756-95fe3df13d3d","Type":"ContainerStarted","Data":"1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58"} Dec 02 15:16:44 crc kubenswrapper[4900]: I1202 15:16:44.366116 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24a2c9ef-380a-4ad0-8756-95fe3df13d3d","Type":"ContainerStarted","Data":"52c923a6d7824423e6043d7bc2310d3ca5761a8c29db8b4cc58cd853b29067e1"} Dec 02 15:16:44 crc kubenswrapper[4900]: I1202 15:16:44.368634 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"94e1f486-c3e5-420a-b8af-de18cb2b73b2","Type":"ContainerStarted","Data":"b02d4227a7252caf3a8275e2ce467d29880ce1596272424efd5faf72769e2538"} Dec 02 15:16:44 crc kubenswrapper[4900]: I1202 15:16:44.399693 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.399671956 podStartE2EDuration="2.399671956s" podCreationTimestamp="2025-12-02 15:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:44.395348695 +0000 UTC m=+5649.811162566" watchObservedRunningTime="2025-12-02 15:16:44.399671956 +0000 UTC m=+5649.815485827" Dec 02 15:16:44 crc kubenswrapper[4900]: I1202 15:16:44.441072 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.441042547 podStartE2EDuration="2.441042547s" podCreationTimestamp="2025-12-02 15:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:44.422687612 +0000 UTC m=+5649.838501553" watchObservedRunningTime="2025-12-02 15:16:44.441042547 +0000 UTC m=+5649.856856428" Dec 02 15:16:44 crc kubenswrapper[4900]: I1202 15:16:44.674421 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 15:16:45 crc kubenswrapper[4900]: I1202 15:16:45.116779 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:16:45 crc kubenswrapper[4900]: I1202 15:16:45.116876 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:16:47 crc kubenswrapper[4900]: I1202 15:16:47.832408 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 15:16:47 crc kubenswrapper[4900]: I1202 15:16:47.832880 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 15:16:49 crc kubenswrapper[4900]: I1202 15:16:49.675223 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 15:16:49 crc kubenswrapper[4900]: I1202 15:16:49.718944 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 15:16:50 crc kubenswrapper[4900]: I1202 15:16:50.484362 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 15:16:52 crc kubenswrapper[4900]: I1202 15:16:52.821716 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 15:16:52 crc kubenswrapper[4900]: I1202 15:16:52.822177 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 15:16:52 crc kubenswrapper[4900]: I1202 15:16:52.832166 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 15:16:52 crc kubenswrapper[4900]: I1202 15:16:52.832271 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 15:16:53 crc kubenswrapper[4900]: I1202 15:16:53.945057 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.72:8775/\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Dec 02 15:16:53 crc kubenswrapper[4900]: I1202 15:16:53.985809 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.71:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 15:16:53 crc kubenswrapper[4900]: I1202 15:16:53.985841 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.72:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 15:16:53 crc kubenswrapper[4900]: I1202 15:16:53.985848 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.71:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 15:17:01 crc kubenswrapper[4900]: I1202 15:17:01.474791 4900 scope.go:117] "RemoveContainer" containerID="b121a7162167d88698bfd3ef64bd7a928f439165fbd7e79dbb68c089b6d3e1ad" Dec 02 15:17:01 crc kubenswrapper[4900]: I1202 15:17:01.539898 4900 scope.go:117] "RemoveContainer" containerID="f83da3bfe3e054a647e28fe56c91482130ac54310196fa2fb2e1e07e5e3b0ef7" Dec 02 15:17:01 crc kubenswrapper[4900]: I1202 15:17:01.580881 4900 scope.go:117] "RemoveContainer" containerID="e520f6044690fd3e9fdd2f5d2a2f3eb92f848a198c178676ed55e43e0b0bb24e" Dec 02 15:17:02 crc kubenswrapper[4900]: I1202 15:17:02.829063 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 15:17:02 crc kubenswrapper[4900]: I1202 15:17:02.829308 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 15:17:02 crc kubenswrapper[4900]: I1202 15:17:02.830247 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 15:17:02 crc kubenswrapper[4900]: I1202 15:17:02.830304 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 15:17:02 crc kubenswrapper[4900]: I1202 15:17:02.834091 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 15:17:02 crc kubenswrapper[4900]: I1202 15:17:02.835104 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 15:17:02 crc kubenswrapper[4900]: I1202 15:17:02.835671 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 15:17:02 crc kubenswrapper[4900]: I1202 15:17:02.836176 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 15:17:02 crc kubenswrapper[4900]: I1202 15:17:02.836570 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.103772 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698cf7c765-k7g82"] Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.113072 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.117557 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698cf7c765-k7g82"] Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.195478 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfvf6\" (UniqueName: \"kubernetes.io/projected/e66f9ce7-2207-474c-937c-52776e4700c3-kube-api-access-xfvf6\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.195524 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-config\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.195542 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-sb\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.195725 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-nb\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.195795 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-dns-svc\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.297366 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfvf6\" (UniqueName: \"kubernetes.io/projected/e66f9ce7-2207-474c-937c-52776e4700c3-kube-api-access-xfvf6\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.297407 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-config\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.297429 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-sb\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.297496 4900 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-nb\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.297578 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-dns-svc\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.298469 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-config\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.298528 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-sb\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.298542 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-nb\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.298584 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-dns-svc\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.315330 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfvf6\" (UniqueName: \"kubernetes.io/projected/e66f9ce7-2207-474c-937c-52776e4700c3-kube-api-access-xfvf6\") pod \"dnsmasq-dns-698cf7c765-k7g82\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.436958 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.601197 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 15:17:03 crc kubenswrapper[4900]: I1202 15:17:03.927533 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698cf7c765-k7g82"] Dec 02 15:17:03 crc kubenswrapper[4900]: W1202 15:17:03.929417 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode66f9ce7_2207_474c_937c_52776e4700c3.slice/crio-262d25d0afeb731dba0c22ec51d99822f4e4664d25869d3ee52030c7bcf5149c WatchSource:0}: Error finding container 262d25d0afeb731dba0c22ec51d99822f4e4664d25869d3ee52030c7bcf5149c: Status 404 returned error can't find the container with id 262d25d0afeb731dba0c22ec51d99822f4e4664d25869d3ee52030c7bcf5149c Dec 02 15:17:04 crc kubenswrapper[4900]: I1202 15:17:04.599519 4900 generic.go:334] "Generic (PLEG): container finished" podID="e66f9ce7-2207-474c-937c-52776e4700c3" containerID="437f62d2b6ed9155378d8c428eeea494d6e71c09190a8e96b25511a3e0b25a12" exitCode=0 Dec 02 15:17:04 crc kubenswrapper[4900]: I1202 15:17:04.599612 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" event={"ID":"e66f9ce7-2207-474c-937c-52776e4700c3","Type":"ContainerDied","Data":"437f62d2b6ed9155378d8c428eeea494d6e71c09190a8e96b25511a3e0b25a12"} Dec 02 15:17:04 crc kubenswrapper[4900]: I1202 15:17:04.601067 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" event={"ID":"e66f9ce7-2207-474c-937c-52776e4700c3","Type":"ContainerStarted","Data":"262d25d0afeb731dba0c22ec51d99822f4e4664d25869d3ee52030c7bcf5149c"} Dec 02 15:17:05 crc kubenswrapper[4900]: I1202 15:17:05.617424 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" event={"ID":"e66f9ce7-2207-474c-937c-52776e4700c3","Type":"ContainerStarted","Data":"a8cb035b090ec988908c760f66805310e0c4ffbf56a26258197aa0e5b3d729fb"} Dec 02 15:17:05 crc kubenswrapper[4900]: I1202 15:17:05.617498 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:05 crc kubenswrapper[4900]: I1202 15:17:05.642881 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" podStartSLOduration=2.642863264 podStartE2EDuration="2.642863264s" podCreationTimestamp="2025-12-02 15:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:05.635707063 +0000 UTC m=+5671.051520924" watchObservedRunningTime="2025-12-02 15:17:05.642863264 +0000 UTC m=+5671.058677115" Dec 02 15:17:13 crc kubenswrapper[4900]: I1202 15:17:13.438852 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:13 crc kubenswrapper[4900]: I1202 15:17:13.535031 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5d4667dc-8gnfx"] Dec 02 15:17:13 crc kubenswrapper[4900]: I1202 15:17:13.535515 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" podUID="bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" containerName="dnsmasq-dns" 
containerID="cri-o://68b0017238675fcf2907e4d9f261e64c42689db03ae2e5eaa9c74b49fa64d531" gracePeriod=10 Dec 02 15:17:13 crc kubenswrapper[4900]: I1202 15:17:13.699210 4900 generic.go:334] "Generic (PLEG): container finished" podID="bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" containerID="68b0017238675fcf2907e4d9f261e64c42689db03ae2e5eaa9c74b49fa64d531" exitCode=0 Dec 02 15:17:13 crc kubenswrapper[4900]: I1202 15:17:13.699254 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" event={"ID":"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277","Type":"ContainerDied","Data":"68b0017238675fcf2907e4d9f261e64c42689db03ae2e5eaa9c74b49fa64d531"} Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.044523 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.101573 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-config\") pod \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.101768 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-dns-svc\") pod \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.101818 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-nb\") pod \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.101873 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmsd4\" (UniqueName: \"kubernetes.io/projected/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-kube-api-access-jmsd4\") pod \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.101948 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-sb\") pod \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\" (UID: \"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277\") " Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.144534 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-kube-api-access-jmsd4" (OuterVolumeSpecName: "kube-api-access-jmsd4") pod "bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" (UID: "bc662ef3-57ef-4f0f-8a61-5a2bce3ba277"). InnerVolumeSpecName "kube-api-access-jmsd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.169835 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" (UID: "bc662ef3-57ef-4f0f-8a61-5a2bce3ba277"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.176555 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" (UID: "bc662ef3-57ef-4f0f-8a61-5a2bce3ba277"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.185866 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" (UID: "bc662ef3-57ef-4f0f-8a61-5a2bce3ba277"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.187339 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-config" (OuterVolumeSpecName: "config") pod "bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" (UID: "bc662ef3-57ef-4f0f-8a61-5a2bce3ba277"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.205765 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmsd4\" (UniqueName: \"kubernetes.io/projected/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-kube-api-access-jmsd4\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.205798 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.205808 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.205819 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.205827 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.707877 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" event={"ID":"bc662ef3-57ef-4f0f-8a61-5a2bce3ba277","Type":"ContainerDied","Data":"b6f3150e5dcffa99898bd94a00ca94a589707ba5ae8ea144e71411d248fd7ae3"} Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.707930 4900 scope.go:117] "RemoveContainer" containerID="68b0017238675fcf2907e4d9f261e64c42689db03ae2e5eaa9c74b49fa64d531" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.708049 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5d4667dc-8gnfx" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.728561 4900 scope.go:117] "RemoveContainer" containerID="21ceda574dd748eaffa3b161eec4d4c5084f87e45eb3188fd2ed668b809e0b71" Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.743083 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5d4667dc-8gnfx"] Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.750768 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d5d4667dc-8gnfx"] Dec 02 15:17:14 crc kubenswrapper[4900]: I1202 15:17:14.922171 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" path="/var/lib/kubelet/pods/bc662ef3-57ef-4f0f-8a61-5a2bce3ba277/volumes" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.116927 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.117030 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.696073 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-28b4-account-create-update-8xg7x"] Dec 02 15:17:15 crc kubenswrapper[4900]: E1202 15:17:15.697049 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" containerName="dnsmasq-dns" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.697066 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" containerName="dnsmasq-dns" Dec 02 15:17:15 crc kubenswrapper[4900]: E1202 15:17:15.697088 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" containerName="init" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.697098 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" containerName="init" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.697386 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc662ef3-57ef-4f0f-8a61-5a2bce3ba277" containerName="dnsmasq-dns" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.698165 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-28b4-account-create-update-8xg7x" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.707633 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zxh7q"] Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.709066 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zxh7q" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.720051 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zxh7q"] Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.765906 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.766769 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-28b4-account-create-update-8xg7x"] Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.861676 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ckbt\" (UniqueName: \"kubernetes.io/projected/54cf169b-48e6-4515-8378-8301c440501e-kube-api-access-4ckbt\") pod \"cinder-db-create-zxh7q\" (UID: \"54cf169b-48e6-4515-8378-8301c440501e\") " pod="openstack/cinder-db-create-zxh7q" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.861735 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54cf169b-48e6-4515-8378-8301c440501e-operator-scripts\") pod \"cinder-db-create-zxh7q\" (UID: \"54cf169b-48e6-4515-8378-8301c440501e\") " pod="openstack/cinder-db-create-zxh7q" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.861803 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b8nd\" (UniqueName: \"kubernetes.io/projected/29f29261-e4ef-4678-baa6-faf96aebd096-kube-api-access-2b8nd\") pod \"cinder-28b4-account-create-update-8xg7x\" (UID: \"29f29261-e4ef-4678-baa6-faf96aebd096\") " pod="openstack/cinder-28b4-account-create-update-8xg7x" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.861873 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f29261-e4ef-4678-baa6-faf96aebd096-operator-scripts\") pod \"cinder-28b4-account-create-update-8xg7x\" (UID: \"29f29261-e4ef-4678-baa6-faf96aebd096\") " pod="openstack/cinder-28b4-account-create-update-8xg7x" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.963284 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ckbt\" (UniqueName: \"kubernetes.io/projected/54cf169b-48e6-4515-8378-8301c440501e-kube-api-access-4ckbt\") pod \"cinder-db-create-zxh7q\" (UID: \"54cf169b-48e6-4515-8378-8301c440501e\") " pod="openstack/cinder-db-create-zxh7q" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.963324 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54cf169b-48e6-4515-8378-8301c440501e-operator-scripts\") pod \"cinder-db-create-zxh7q\" (UID: \"54cf169b-48e6-4515-8378-8301c440501e\") " pod="openstack/cinder-db-create-zxh7q" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.963352 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b8nd\" (UniqueName: \"kubernetes.io/projected/29f29261-e4ef-4678-baa6-faf96aebd096-kube-api-access-2b8nd\") pod \"cinder-28b4-account-create-update-8xg7x\" (UID: \"29f29261-e4ef-4678-baa6-faf96aebd096\") " pod="openstack/cinder-28b4-account-create-update-8xg7x" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.963407 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f29261-e4ef-4678-baa6-faf96aebd096-operator-scripts\") pod \"cinder-28b4-account-create-update-8xg7x\" (UID: \"29f29261-e4ef-4678-baa6-faf96aebd096\") " pod="openstack/cinder-28b4-account-create-update-8xg7x" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.964099 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f29261-e4ef-4678-baa6-faf96aebd096-operator-scripts\") pod \"cinder-28b4-account-create-update-8xg7x\" (UID: \"29f29261-e4ef-4678-baa6-faf96aebd096\") " pod="openstack/cinder-28b4-account-create-update-8xg7x" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.964099 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54cf169b-48e6-4515-8378-8301c440501e-operator-scripts\") pod \"cinder-db-create-zxh7q\" (UID: \"54cf169b-48e6-4515-8378-8301c440501e\") " pod="openstack/cinder-db-create-zxh7q" Dec 02 15:17:15 crc kubenswrapper[4900]: I1202 15:17:15.999751 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ckbt\" (UniqueName: \"kubernetes.io/projected/54cf169b-48e6-4515-8378-8301c440501e-kube-api-access-4ckbt\") pod \"cinder-db-create-zxh7q\" (UID: \"54cf169b-48e6-4515-8378-8301c440501e\") " pod="openstack/cinder-db-create-zxh7q" Dec 02 15:17:16 crc kubenswrapper[4900]: I1202 15:17:16.001201 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b8nd\" (UniqueName: \"kubernetes.io/projected/29f29261-e4ef-4678-baa6-faf96aebd096-kube-api-access-2b8nd\") pod \"cinder-28b4-account-create-update-8xg7x\" (UID: \"29f29261-e4ef-4678-baa6-faf96aebd096\") " pod="openstack/cinder-28b4-account-create-update-8xg7x" Dec 02 15:17:16 crc kubenswrapper[4900]: I1202 15:17:16.094752 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-28b4-account-create-update-8xg7x" Dec 02 15:17:16 crc kubenswrapper[4900]: I1202 15:17:16.108151 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zxh7q" Dec 02 15:17:16 crc kubenswrapper[4900]: I1202 15:17:16.567105 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-28b4-account-create-update-8xg7x"] Dec 02 15:17:16 crc kubenswrapper[4900]: I1202 15:17:16.648835 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zxh7q"] Dec 02 15:17:16 crc kubenswrapper[4900]: W1202 15:17:16.658000 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54cf169b_48e6_4515_8378_8301c440501e.slice/crio-545e17ea3db589ff5e2f8df89fd5e66386f45d9a2035635ee3a2336d001c3d46 WatchSource:0}: Error finding container 545e17ea3db589ff5e2f8df89fd5e66386f45d9a2035635ee3a2336d001c3d46: Status 404 returned error can't find the container with id 545e17ea3db589ff5e2f8df89fd5e66386f45d9a2035635ee3a2336d001c3d46 Dec 02 15:17:16 crc kubenswrapper[4900]: I1202 15:17:16.795057 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-28b4-account-create-update-8xg7x" event={"ID":"29f29261-e4ef-4678-baa6-faf96aebd096","Type":"ContainerStarted","Data":"a13e302e872bf70ff8b0de2dd5562eb90ae71d4b18cdf6b02368d4ecaa94595a"} Dec 02 15:17:16 crc kubenswrapper[4900]: I1202 15:17:16.795103 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-28b4-account-create-update-8xg7x" event={"ID":"29f29261-e4ef-4678-baa6-faf96aebd096","Type":"ContainerStarted","Data":"7c9f386fc7cb4cb5036264eb9a579168c09e92e7d65e14be8dca2337c7025868"} Dec 02 15:17:16 crc kubenswrapper[4900]: I1202 15:17:16.797434 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zxh7q" event={"ID":"54cf169b-48e6-4515-8378-8301c440501e","Type":"ContainerStarted","Data":"545e17ea3db589ff5e2f8df89fd5e66386f45d9a2035635ee3a2336d001c3d46"} Dec 02 15:17:16 crc kubenswrapper[4900]: I1202 15:17:16.818763 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-28b4-account-create-update-8xg7x" podStartSLOduration=1.818747852 podStartE2EDuration="1.818747852s" podCreationTimestamp="2025-12-02 15:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:16.811156209 +0000 UTC m=+5682.226970060" watchObservedRunningTime="2025-12-02 15:17:16.818747852 +0000 UTC m=+5682.234561703" Dec 02 15:17:17 crc kubenswrapper[4900]: I1202 15:17:17.808850 4900 generic.go:334] "Generic (PLEG): container finished" podID="29f29261-e4ef-4678-baa6-faf96aebd096" containerID="a13e302e872bf70ff8b0de2dd5562eb90ae71d4b18cdf6b02368d4ecaa94595a" exitCode=0 Dec 02 15:17:17 crc kubenswrapper[4900]: I1202 15:17:17.808930 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-28b4-account-create-update-8xg7x" event={"ID":"29f29261-e4ef-4678-baa6-faf96aebd096","Type":"ContainerDied","Data":"a13e302e872bf70ff8b0de2dd5562eb90ae71d4b18cdf6b02368d4ecaa94595a"} Dec 02 15:17:17 crc kubenswrapper[4900]: I1202 15:17:17.811121 4900 generic.go:334] "Generic (PLEG): container finished" podID="54cf169b-48e6-4515-8378-8301c440501e" containerID="fe63642aece313d3acd628cdeb5c5ae7494a8c3bf7e9c2bfe853f18445e03af3" exitCode=0 Dec 02 15:17:17 crc kubenswrapper[4900]: I1202 15:17:17.811165 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zxh7q" 
event={"ID":"54cf169b-48e6-4515-8378-8301c440501e","Type":"ContainerDied","Data":"fe63642aece313d3acd628cdeb5c5ae7494a8c3bf7e9c2bfe853f18445e03af3"} Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.229953 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zxh7q" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.237141 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-28b4-account-create-update-8xg7x" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.327002 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ckbt\" (UniqueName: \"kubernetes.io/projected/54cf169b-48e6-4515-8378-8301c440501e-kube-api-access-4ckbt\") pod \"54cf169b-48e6-4515-8378-8301c440501e\" (UID: \"54cf169b-48e6-4515-8378-8301c440501e\") " Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.327093 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f29261-e4ef-4678-baa6-faf96aebd096-operator-scripts\") pod \"29f29261-e4ef-4678-baa6-faf96aebd096\" (UID: \"29f29261-e4ef-4678-baa6-faf96aebd096\") " Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.327245 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54cf169b-48e6-4515-8378-8301c440501e-operator-scripts\") pod \"54cf169b-48e6-4515-8378-8301c440501e\" (UID: \"54cf169b-48e6-4515-8378-8301c440501e\") " Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.327769 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29f29261-e4ef-4678-baa6-faf96aebd096-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29f29261-e4ef-4678-baa6-faf96aebd096" (UID: "29f29261-e4ef-4678-baa6-faf96aebd096"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.327804 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54cf169b-48e6-4515-8378-8301c440501e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54cf169b-48e6-4515-8378-8301c440501e" (UID: "54cf169b-48e6-4515-8378-8301c440501e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.327884 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b8nd\" (UniqueName: \"kubernetes.io/projected/29f29261-e4ef-4678-baa6-faf96aebd096-kube-api-access-2b8nd\") pod \"29f29261-e4ef-4678-baa6-faf96aebd096\" (UID: \"29f29261-e4ef-4678-baa6-faf96aebd096\") " Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.328453 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f29261-e4ef-4678-baa6-faf96aebd096-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.328528 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54cf169b-48e6-4515-8378-8301c440501e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.331545 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f29261-e4ef-4678-baa6-faf96aebd096-kube-api-access-2b8nd" (OuterVolumeSpecName: "kube-api-access-2b8nd") pod "29f29261-e4ef-4678-baa6-faf96aebd096" (UID: "29f29261-e4ef-4678-baa6-faf96aebd096"). InnerVolumeSpecName "kube-api-access-2b8nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.333998 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54cf169b-48e6-4515-8378-8301c440501e-kube-api-access-4ckbt" (OuterVolumeSpecName: "kube-api-access-4ckbt") pod "54cf169b-48e6-4515-8378-8301c440501e" (UID: "54cf169b-48e6-4515-8378-8301c440501e"). InnerVolumeSpecName "kube-api-access-4ckbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.429565 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b8nd\" (UniqueName: \"kubernetes.io/projected/29f29261-e4ef-4678-baa6-faf96aebd096-kube-api-access-2b8nd\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.429592 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ckbt\" (UniqueName: \"kubernetes.io/projected/54cf169b-48e6-4515-8378-8301c440501e-kube-api-access-4ckbt\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.837035 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zxh7q" event={"ID":"54cf169b-48e6-4515-8378-8301c440501e","Type":"ContainerDied","Data":"545e17ea3db589ff5e2f8df89fd5e66386f45d9a2035635ee3a2336d001c3d46"} Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.837603 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="545e17ea3db589ff5e2f8df89fd5e66386f45d9a2035635ee3a2336d001c3d46" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.837115 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zxh7q" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.839562 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-28b4-account-create-update-8xg7x" event={"ID":"29f29261-e4ef-4678-baa6-faf96aebd096","Type":"ContainerDied","Data":"7c9f386fc7cb4cb5036264eb9a579168c09e92e7d65e14be8dca2337c7025868"} Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.839614 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-28b4-account-create-update-8xg7x" Dec 02 15:17:19 crc kubenswrapper[4900]: I1202 15:17:19.839619 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c9f386fc7cb4cb5036264eb9a579168c09e92e7d65e14be8dca2337c7025868" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.858535 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wrh6x"] Dec 02 15:17:20 crc kubenswrapper[4900]: E1202 15:17:20.859081 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f29261-e4ef-4678-baa6-faf96aebd096" containerName="mariadb-account-create-update" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.859097 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f29261-e4ef-4678-baa6-faf96aebd096" containerName="mariadb-account-create-update" Dec 02 15:17:20 crc kubenswrapper[4900]: E1202 15:17:20.859148 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54cf169b-48e6-4515-8378-8301c440501e" containerName="mariadb-database-create" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.859156 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="54cf169b-48e6-4515-8378-8301c440501e" containerName="mariadb-database-create" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.859365 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="54cf169b-48e6-4515-8378-8301c440501e" containerName="mariadb-database-create" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.859390 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f29261-e4ef-4678-baa6-faf96aebd096" containerName="mariadb-account-create-update" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.860191 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.862496 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.863175 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.863345 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bv4qs" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.869038 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wrh6x"] Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.954751 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct69n\" (UniqueName: \"kubernetes.io/projected/43f4a89e-fa23-47be-9747-b74a65735d5c-kube-api-access-ct69n\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.954805 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-config-data\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.954976 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43f4a89e-fa23-47be-9747-b74a65735d5c-etc-machine-id\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.955049 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-scripts\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.955085 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-db-sync-config-data\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:20 crc kubenswrapper[4900]: I1202 15:17:20.955187 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-combined-ca-bundle\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.057782 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct69n\" (UniqueName: \"kubernetes.io/projected/43f4a89e-fa23-47be-9747-b74a65735d5c-kube-api-access-ct69n\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.057864 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-config-data\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.057952 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43f4a89e-fa23-47be-9747-b74a65735d5c-etc-machine-id\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.058001 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-scripts\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.058045 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-db-sync-config-data\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.058094 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-combined-ca-bundle\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.058303 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43f4a89e-fa23-47be-9747-b74a65735d5c-etc-machine-id\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.063326 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-db-sync-config-data\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.064182 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-scripts\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.066501 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-config-data\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.073575 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-combined-ca-bundle\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " 
pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.074376 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct69n\" (UniqueName: \"kubernetes.io/projected/43f4a89e-fa23-47be-9747-b74a65735d5c-kube-api-access-ct69n\") pod \"cinder-db-sync-wrh6x\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.191699 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.733495 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wrh6x"] Dec 02 15:17:21 crc kubenswrapper[4900]: W1202 15:17:21.737256 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43f4a89e_fa23_47be_9747_b74a65735d5c.slice/crio-1933186c8e2ebd2ece1ca6e50839fc1afca5070dc9261b03bd3844f78d608866 WatchSource:0}: Error finding container 1933186c8e2ebd2ece1ca6e50839fc1afca5070dc9261b03bd3844f78d608866: Status 404 returned error can't find the container with id 1933186c8e2ebd2ece1ca6e50839fc1afca5070dc9261b03bd3844f78d608866 Dec 02 15:17:21 crc kubenswrapper[4900]: I1202 15:17:21.867892 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrh6x" event={"ID":"43f4a89e-fa23-47be-9747-b74a65735d5c","Type":"ContainerStarted","Data":"1933186c8e2ebd2ece1ca6e50839fc1afca5070dc9261b03bd3844f78d608866"} Dec 02 15:17:22 crc kubenswrapper[4900]: I1202 15:17:22.881402 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrh6x" event={"ID":"43f4a89e-fa23-47be-9747-b74a65735d5c","Type":"ContainerStarted","Data":"ae880ad5056798052776a59981c137487fc399982a3a73a269c865600bbca156"} Dec 02 15:17:22 crc kubenswrapper[4900]: I1202 15:17:22.899500 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wrh6x" podStartSLOduration=2.899485671 podStartE2EDuration="2.899485671s" podCreationTimestamp="2025-12-02 15:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:22.896194319 +0000 UTC m=+5688.312008170" watchObservedRunningTime="2025-12-02 15:17:22.899485671 +0000 UTC m=+5688.315299522" Dec 02 15:17:25 crc kubenswrapper[4900]: I1202 15:17:25.926917 4900 generic.go:334] "Generic (PLEG): container finished" podID="43f4a89e-fa23-47be-9747-b74a65735d5c" containerID="ae880ad5056798052776a59981c137487fc399982a3a73a269c865600bbca156" exitCode=0 Dec 02 15:17:25 crc kubenswrapper[4900]: I1202 15:17:25.927288 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrh6x" event={"ID":"43f4a89e-fa23-47be-9747-b74a65735d5c","Type":"ContainerDied","Data":"ae880ad5056798052776a59981c137487fc399982a3a73a269c865600bbca156"} Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.289736 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.418300 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-db-sync-config-data\") pod \"43f4a89e-fa23-47be-9747-b74a65735d5c\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.418436 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43f4a89e-fa23-47be-9747-b74a65735d5c-etc-machine-id\") pod \"43f4a89e-fa23-47be-9747-b74a65735d5c\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.418520 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct69n\" (UniqueName: \"kubernetes.io/projected/43f4a89e-fa23-47be-9747-b74a65735d5c-kube-api-access-ct69n\") pod \"43f4a89e-fa23-47be-9747-b74a65735d5c\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.418631 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43f4a89e-fa23-47be-9747-b74a65735d5c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "43f4a89e-fa23-47be-9747-b74a65735d5c" (UID: "43f4a89e-fa23-47be-9747-b74a65735d5c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.418676 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-config-data\") pod \"43f4a89e-fa23-47be-9747-b74a65735d5c\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.418732 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-scripts\") pod \"43f4a89e-fa23-47be-9747-b74a65735d5c\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.418755 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-combined-ca-bundle\") pod \"43f4a89e-fa23-47be-9747-b74a65735d5c\" (UID: \"43f4a89e-fa23-47be-9747-b74a65735d5c\") " Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.419949 4900 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43f4a89e-fa23-47be-9747-b74a65735d5c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.434682 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "43f4a89e-fa23-47be-9747-b74a65735d5c" (UID: "43f4a89e-fa23-47be-9747-b74a65735d5c"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.435455 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f4a89e-fa23-47be-9747-b74a65735d5c-kube-api-access-ct69n" (OuterVolumeSpecName: "kube-api-access-ct69n") pod "43f4a89e-fa23-47be-9747-b74a65735d5c" (UID: "43f4a89e-fa23-47be-9747-b74a65735d5c"). InnerVolumeSpecName "kube-api-access-ct69n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.441262 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-scripts" (OuterVolumeSpecName: "scripts") pod "43f4a89e-fa23-47be-9747-b74a65735d5c" (UID: "43f4a89e-fa23-47be-9747-b74a65735d5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.464576 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43f4a89e-fa23-47be-9747-b74a65735d5c" (UID: "43f4a89e-fa23-47be-9747-b74a65735d5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.494415 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-config-data" (OuterVolumeSpecName: "config-data") pod "43f4a89e-fa23-47be-9747-b74a65735d5c" (UID: "43f4a89e-fa23-47be-9747-b74a65735d5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.521718 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct69n\" (UniqueName: \"kubernetes.io/projected/43f4a89e-fa23-47be-9747-b74a65735d5c-kube-api-access-ct69n\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.521775 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.521793 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.521809 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.521829 4900 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f4a89e-fa23-47be-9747-b74a65735d5c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.952600 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrh6x" event={"ID":"43f4a89e-fa23-47be-9747-b74a65735d5c","Type":"ContainerDied","Data":"1933186c8e2ebd2ece1ca6e50839fc1afca5070dc9261b03bd3844f78d608866"} Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.952705 4900 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="1933186c8e2ebd2ece1ca6e50839fc1afca5070dc9261b03bd3844f78d608866" Dec 02 15:17:27 crc kubenswrapper[4900]: I1202 15:17:27.952739 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wrh6x" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.281757 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d894f77c-x8xtc"] Dec 02 15:17:28 crc kubenswrapper[4900]: E1202 15:17:28.282477 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f4a89e-fa23-47be-9747-b74a65735d5c" containerName="cinder-db-sync" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.282493 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f4a89e-fa23-47be-9747-b74a65735d5c" containerName="cinder-db-sync" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.282763 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f4a89e-fa23-47be-9747-b74a65735d5c" containerName="cinder-db-sync" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.285263 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.292593 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d894f77c-x8xtc"] Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.435251 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-config\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.435304 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-nb\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.435363 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lw7j\" (UniqueName: \"kubernetes.io/projected/3b477c94-42a7-4865-849a-d7a4a77c17dc-kube-api-access-2lw7j\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.435417 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-dns-svc\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.435556 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-sb\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.536447 4900 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-api-0"] Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.537082 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-config\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.537132 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-nb\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.537151 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lw7j\" (UniqueName: \"kubernetes.io/projected/3b477c94-42a7-4865-849a-d7a4a77c17dc-kube-api-access-2lw7j\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.537177 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-dns-svc\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.537291 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-sb\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.538020 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.538610 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-config\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.538607 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-dns-svc\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.539065 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-nb\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.539611 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-sb\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.542428 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.546906 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.547045 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.547186 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bv4qs" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.562448 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.601231 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lw7j\" (UniqueName: \"kubernetes.io/projected/3b477c94-42a7-4865-849a-d7a4a77c17dc-kube-api-access-2lw7j\") pod \"dnsmasq-dns-5d894f77c-x8xtc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.614959 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.638555 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.638633 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f503388f-4299-472a-9d90-32f62770071f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.638681 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f503388f-4299-472a-9d90-32f62770071f-logs\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.638702 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqmg4\" (UniqueName: \"kubernetes.io/projected/f503388f-4299-472a-9d90-32f62770071f-kube-api-access-dqmg4\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.638723 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.638757 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.638813 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-scripts\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.744686 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.745150 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f503388f-4299-472a-9d90-32f62770071f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.745190 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/f503388f-4299-472a-9d90-32f62770071f-logs\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.745225 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqmg4\" (UniqueName: \"kubernetes.io/projected/f503388f-4299-472a-9d90-32f62770071f-kube-api-access-dqmg4\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.745232 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f503388f-4299-472a-9d90-32f62770071f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.745259 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.745316 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.745405 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-scripts\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.745604 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f503388f-4299-472a-9d90-32f62770071f-logs\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.752149 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.752913 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data-custom\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.753307 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-scripts\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.764261 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.779375 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqmg4\" (UniqueName: \"kubernetes.io/projected/f503388f-4299-472a-9d90-32f62770071f-kube-api-access-dqmg4\") pod \"cinder-api-0\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.873306 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.922929 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d894f77c-x8xtc"] Dec 02 15:17:28 crc kubenswrapper[4900]: I1202 15:17:28.978769 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" event={"ID":"3b477c94-42a7-4865-849a-d7a4a77c17dc","Type":"ContainerStarted","Data":"66b896a1c4e2f738474c9cb8937a7bf43d351597c1e7121d0fea027fbc3ad203"} Dec 02 15:17:29 crc kubenswrapper[4900]: I1202 15:17:29.391927 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 15:17:29 crc kubenswrapper[4900]: I1202 15:17:29.990166 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503388f-4299-472a-9d90-32f62770071f","Type":"ContainerStarted","Data":"92cf03dbb1b3080335a48f4019a6030f5fb50dc39a42b2d1a377b10ff2624cc3"} Dec 02 15:17:29 crc kubenswrapper[4900]: I1202 15:17:29.992865 4900 generic.go:334] "Generic (PLEG): container finished" podID="3b477c94-42a7-4865-849a-d7a4a77c17dc" containerID="d8b8d368ecb21e47bfb083c9add34b61383e431d44e67fd4ca7560e9c48ed7ef" exitCode=0 Dec 02 15:17:29 crc kubenswrapper[4900]: I1202 15:17:29.992905 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" event={"ID":"3b477c94-42a7-4865-849a-d7a4a77c17dc","Type":"ContainerDied","Data":"d8b8d368ecb21e47bfb083c9add34b61383e431d44e67fd4ca7560e9c48ed7ef"} Dec 02 15:17:31 crc kubenswrapper[4900]: I1202 15:17:31.003039 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503388f-4299-472a-9d90-32f62770071f","Type":"ContainerStarted","Data":"55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed"} Dec 02 15:17:31 crc kubenswrapper[4900]: I1202 15:17:31.003312 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503388f-4299-472a-9d90-32f62770071f","Type":"ContainerStarted","Data":"9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985"} Dec 02 15:17:31 crc kubenswrapper[4900]: I1202 15:17:31.003328 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 02 15:17:31 crc kubenswrapper[4900]: I1202 15:17:31.005390 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" event={"ID":"3b477c94-42a7-4865-849a-d7a4a77c17dc","Type":"ContainerStarted","Data":"217076c6b2a28eeba77a19aef71169b0bf375bd7cf643fefefa08566c98775b8"} Dec 02 15:17:31 crc kubenswrapper[4900]: I1202 15:17:31.005515 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:31 crc kubenswrapper[4900]: I1202 15:17:31.049665 4900 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.049615405 podStartE2EDuration="3.049615405s" podCreationTimestamp="2025-12-02 15:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:31.025710874 +0000 UTC m=+5696.441524735" watchObservedRunningTime="2025-12-02 15:17:31.049615405 +0000 UTC m=+5696.465429256" Dec 02 15:17:31 crc kubenswrapper[4900]: I1202 15:17:31.056902 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" podStartSLOduration=3.056883749 podStartE2EDuration="3.056883749s" podCreationTimestamp="2025-12-02 15:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:31.04799433 +0000 UTC m=+5696.463808181" watchObservedRunningTime="2025-12-02 15:17:31.056883749 +0000 UTC m=+5696.472697610" Dec 02 15:17:38 crc kubenswrapper[4900]: I1202 15:17:38.616942 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:17:38 crc kubenswrapper[4900]: I1202 15:17:38.685690 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698cf7c765-k7g82"] Dec 02 15:17:38 crc kubenswrapper[4900]: I1202 15:17:38.685984 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" podUID="e66f9ce7-2207-474c-937c-52776e4700c3" containerName="dnsmasq-dns" containerID="cri-o://a8cb035b090ec988908c760f66805310e0c4ffbf56a26258197aa0e5b3d729fb" gracePeriod=10 Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.096614 4900 generic.go:334] "Generic (PLEG): container finished" podID="e66f9ce7-2207-474c-937c-52776e4700c3" containerID="a8cb035b090ec988908c760f66805310e0c4ffbf56a26258197aa0e5b3d729fb" exitCode=0 Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.096909 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" event={"ID":"e66f9ce7-2207-474c-937c-52776e4700c3","Type":"ContainerDied","Data":"a8cb035b090ec988908c760f66805310e0c4ffbf56a26258197aa0e5b3d729fb"} Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.232420 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.368738 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-dns-svc\") pod \"e66f9ce7-2207-474c-937c-52776e4700c3\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.368888 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfvf6\" (UniqueName: \"kubernetes.io/projected/e66f9ce7-2207-474c-937c-52776e4700c3-kube-api-access-xfvf6\") pod \"e66f9ce7-2207-474c-937c-52776e4700c3\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.369041 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-nb\") pod \"e66f9ce7-2207-474c-937c-52776e4700c3\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.369067 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-config\") pod \"e66f9ce7-2207-474c-937c-52776e4700c3\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.369086 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-sb\") pod \"e66f9ce7-2207-474c-937c-52776e4700c3\" (UID: \"e66f9ce7-2207-474c-937c-52776e4700c3\") " Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.392897 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66f9ce7-2207-474c-937c-52776e4700c3-kube-api-access-xfvf6" (OuterVolumeSpecName: "kube-api-access-xfvf6") pod "e66f9ce7-2207-474c-937c-52776e4700c3" (UID: "e66f9ce7-2207-474c-937c-52776e4700c3"). InnerVolumeSpecName "kube-api-access-xfvf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.416899 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e66f9ce7-2207-474c-937c-52776e4700c3" (UID: "e66f9ce7-2207-474c-937c-52776e4700c3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.421284 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e66f9ce7-2207-474c-937c-52776e4700c3" (UID: "e66f9ce7-2207-474c-937c-52776e4700c3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.431812 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-config" (OuterVolumeSpecName: "config") pod "e66f9ce7-2207-474c-937c-52776e4700c3" (UID: "e66f9ce7-2207-474c-937c-52776e4700c3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.442316 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e66f9ce7-2207-474c-937c-52776e4700c3" (UID: "e66f9ce7-2207-474c-937c-52776e4700c3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.470854 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.470885 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.470896 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.470905 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfvf6\" (UniqueName: \"kubernetes.io/projected/e66f9ce7-2207-474c-937c-52776e4700c3-kube-api-access-xfvf6\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:39 crc kubenswrapper[4900]: I1202 15:17:39.470914 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e66f9ce7-2207-474c-937c-52776e4700c3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.109227 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" event={"ID":"e66f9ce7-2207-474c-937c-52776e4700c3","Type":"ContainerDied","Data":"262d25d0afeb731dba0c22ec51d99822f4e4664d25869d3ee52030c7bcf5149c"} Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.109259 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698cf7c765-k7g82" Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.109309 4900 scope.go:117] "RemoveContainer" containerID="a8cb035b090ec988908c760f66805310e0c4ffbf56a26258197aa0e5b3d729fb" Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.149346 4900 scope.go:117] "RemoveContainer" containerID="437f62d2b6ed9155378d8c428eeea494d6e71c09190a8e96b25511a3e0b25a12" Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.164828 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698cf7c765-k7g82"] Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.179156 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698cf7c765-k7g82"] Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.191701 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.191970 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-log" containerID="cri-o://d65ba649dd817633b3a7fa3059bf6940eb40dce9e7b318ec4d2cef1b9b4f9707" gracePeriod=30 Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.192118 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-metadata" containerID="cri-o://b02d4227a7252caf3a8275e2ce467d29880ce1596272424efd5faf72769e2538" gracePeriod=30 Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.205510 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.205760 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="9548e976-6686-45d3-9b04-74b567fc4b5d" containerName="nova-cell0-conductor-conductor" containerID="cri-o://1f6c3681e4f06007f3b716ee5692554a89699f0a7937879ffce2af1d3b01fcd7" gracePeriod=30 Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.224715 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.225040 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-log" containerID="cri-o://1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58" gracePeriod=30 Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.225560 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-api" containerID="cri-o://425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98" gracePeriod=30 Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.239296 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.239543 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="11e8048a-0901-478c-bf94-cc9fb894bdad" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320" gracePeriod=30 Dec 02 15:17:40 crc kubenswrapper[4900]: 
I1202 15:17:40.254200 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.254531 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="402900ed-44f4-42fe-b2ff-1fb701a09cf2" containerName="nova-scheduler-scheduler" containerID="cri-o://da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047" gracePeriod=30 Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.297980 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.298189 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="64ee25b4-c05d-4fa2-97fd-72687a003c57" containerName="nova-cell1-conductor-conductor" containerID="cri-o://f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92" gracePeriod=30 Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.950040 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e66f9ce7-2207-474c-937c-52776e4700c3" path="/var/lib/kubelet/pods/e66f9ce7-2207-474c-937c-52776e4700c3/volumes" Dec 02 15:17:40 crc kubenswrapper[4900]: I1202 15:17:40.968815 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.069903 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.131265 4900 generic.go:334] "Generic (PLEG): container finished" podID="11e8048a-0901-478c-bf94-cc9fb894bdad" containerID="5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320" exitCode=0 Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.131777 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11e8048a-0901-478c-bf94-cc9fb894bdad","Type":"ContainerDied","Data":"5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320"} Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.132379 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11e8048a-0901-478c-bf94-cc9fb894bdad","Type":"ContainerDied","Data":"9047e6f5a61266ad9bef85fda2d04d3c8f88748d7e8294f554f25b9d374d0de1"} Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.132406 4900 scope.go:117] "RemoveContainer" containerID="5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.132668 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.142913 4900 generic.go:334] "Generic (PLEG): container finished" podID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerID="d65ba649dd817633b3a7fa3059bf6940eb40dce9e7b318ec4d2cef1b9b4f9707" exitCode=143 Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.142991 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"94e1f486-c3e5-420a-b8af-de18cb2b73b2","Type":"ContainerDied","Data":"d65ba649dd817633b3a7fa3059bf6940eb40dce9e7b318ec4d2cef1b9b4f9707"} Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.149379 4900 generic.go:334] "Generic (PLEG): container finished" podID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerID="1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58" exitCode=143 Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.149430 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24a2c9ef-380a-4ad0-8756-95fe3df13d3d","Type":"ContainerDied","Data":"1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58"} Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.160439 4900 scope.go:117] "RemoveContainer" containerID="5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320" Dec 02 15:17:41 crc kubenswrapper[4900]: E1202 15:17:41.160922 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320\": container with ID starting with 5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320 not found: ID does not exist" containerID="5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.160949 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320"} err="failed to get container status \"5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320\": rpc error: code = NotFound desc = could not find container \"5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320\": container with ID starting with 5083f55f4cbb32f9bba8ec5ae1477f39d9df189aa594da96e0f1807ef42a1320 not found: ID does not exist" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.200231 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-config-data\") pod \"11e8048a-0901-478c-bf94-cc9fb894bdad\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.200405 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qms8\" (UniqueName: \"kubernetes.io/projected/11e8048a-0901-478c-bf94-cc9fb894bdad-kube-api-access-9qms8\") pod \"11e8048a-0901-478c-bf94-cc9fb894bdad\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.200462 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-combined-ca-bundle\") pod \"11e8048a-0901-478c-bf94-cc9fb894bdad\" (UID: \"11e8048a-0901-478c-bf94-cc9fb894bdad\") " Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.210837 4900 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e8048a-0901-478c-bf94-cc9fb894bdad-kube-api-access-9qms8" (OuterVolumeSpecName: "kube-api-access-9qms8") pod "11e8048a-0901-478c-bf94-cc9fb894bdad" (UID: "11e8048a-0901-478c-bf94-cc9fb894bdad"). InnerVolumeSpecName "kube-api-access-9qms8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.225139 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-config-data" (OuterVolumeSpecName: "config-data") pod "11e8048a-0901-478c-bf94-cc9fb894bdad" (UID: "11e8048a-0901-478c-bf94-cc9fb894bdad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.225831 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11e8048a-0901-478c-bf94-cc9fb894bdad" (UID: "11e8048a-0901-478c-bf94-cc9fb894bdad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.302562 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.302618 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e8048a-0901-478c-bf94-cc9fb894bdad-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.302637 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qms8\" (UniqueName: \"kubernetes.io/projected/11e8048a-0901-478c-bf94-cc9fb894bdad-kube-api-access-9qms8\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.466252 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.477455 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.499054 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 15:17:41 crc kubenswrapper[4900]: E1202 15:17:41.499532 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66f9ce7-2207-474c-937c-52776e4700c3" containerName="dnsmasq-dns" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.499548 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66f9ce7-2207-474c-937c-52776e4700c3" containerName="dnsmasq-dns" Dec 02 15:17:41 crc kubenswrapper[4900]: E1202 15:17:41.499564 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e8048a-0901-478c-bf94-cc9fb894bdad" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.499571 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e8048a-0901-478c-bf94-cc9fb894bdad" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 15:17:41 crc kubenswrapper[4900]: E1202 15:17:41.499608 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66f9ce7-2207-474c-937c-52776e4700c3" 
containerName="init" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.499614 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66f9ce7-2207-474c-937c-52776e4700c3" containerName="init" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.499875 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e66f9ce7-2207-474c-937c-52776e4700c3" containerName="dnsmasq-dns" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.499890 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e8048a-0901-478c-bf94-cc9fb894bdad" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.500515 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.502972 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.510403 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.606182 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e426fe-b93c-46cf-979b-69fc1eff7684-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"12e426fe-b93c-46cf-979b-69fc1eff7684\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.606362 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e426fe-b93c-46cf-979b-69fc1eff7684-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"12e426fe-b93c-46cf-979b-69fc1eff7684\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.606399 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b668f\" (UniqueName: \"kubernetes.io/projected/12e426fe-b93c-46cf-979b-69fc1eff7684-kube-api-access-b668f\") pod \"nova-cell1-novncproxy-0\" (UID: \"12e426fe-b93c-46cf-979b-69fc1eff7684\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.707885 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e426fe-b93c-46cf-979b-69fc1eff7684-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"12e426fe-b93c-46cf-979b-69fc1eff7684\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.707954 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b668f\" (UniqueName: \"kubernetes.io/projected/12e426fe-b93c-46cf-979b-69fc1eff7684-kube-api-access-b668f\") pod \"nova-cell1-novncproxy-0\" (UID: \"12e426fe-b93c-46cf-979b-69fc1eff7684\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.708030 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e426fe-b93c-46cf-979b-69fc1eff7684-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"12e426fe-b93c-46cf-979b-69fc1eff7684\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.715752 
4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e426fe-b93c-46cf-979b-69fc1eff7684-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"12e426fe-b93c-46cf-979b-69fc1eff7684\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.720040 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e426fe-b93c-46cf-979b-69fc1eff7684-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"12e426fe-b93c-46cf-979b-69fc1eff7684\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.732475 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b668f\" (UniqueName: \"kubernetes.io/projected/12e426fe-b93c-46cf-979b-69fc1eff7684-kube-api-access-b668f\") pod \"nova-cell1-novncproxy-0\" (UID: \"12e426fe-b93c-46cf-979b-69fc1eff7684\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:41 crc kubenswrapper[4900]: I1202 15:17:41.830319 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.310985 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.588690 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.727685 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgmrb\" (UniqueName: \"kubernetes.io/projected/64ee25b4-c05d-4fa2-97fd-72687a003c57-kube-api-access-wgmrb\") pod \"64ee25b4-c05d-4fa2-97fd-72687a003c57\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.727802 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-combined-ca-bundle\") pod \"64ee25b4-c05d-4fa2-97fd-72687a003c57\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.727920 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-config-data\") pod \"64ee25b4-c05d-4fa2-97fd-72687a003c57\" (UID: \"64ee25b4-c05d-4fa2-97fd-72687a003c57\") " Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.733174 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ee25b4-c05d-4fa2-97fd-72687a003c57-kube-api-access-wgmrb" (OuterVolumeSpecName: "kube-api-access-wgmrb") pod "64ee25b4-c05d-4fa2-97fd-72687a003c57" (UID: "64ee25b4-c05d-4fa2-97fd-72687a003c57"). InnerVolumeSpecName "kube-api-access-wgmrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.760433 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-config-data" (OuterVolumeSpecName: "config-data") pod "64ee25b4-c05d-4fa2-97fd-72687a003c57" (UID: "64ee25b4-c05d-4fa2-97fd-72687a003c57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.761333 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64ee25b4-c05d-4fa2-97fd-72687a003c57" (UID: "64ee25b4-c05d-4fa2-97fd-72687a003c57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.830208 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgmrb\" (UniqueName: \"kubernetes.io/projected/64ee25b4-c05d-4fa2-97fd-72687a003c57-kube-api-access-wgmrb\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.830242 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.830252 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ee25b4-c05d-4fa2-97fd-72687a003c57-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:42 crc kubenswrapper[4900]: I1202 15:17:42.923576 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e8048a-0901-478c-bf94-cc9fb894bdad" path="/var/lib/kubelet/pods/11e8048a-0901-478c-bf94-cc9fb894bdad/volumes" Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.173017 4900 generic.go:334] "Generic (PLEG): container finished" podID="64ee25b4-c05d-4fa2-97fd-72687a003c57" containerID="f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92" exitCode=0 Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.173123 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"64ee25b4-c05d-4fa2-97fd-72687a003c57","Type":"ContainerDied","Data":"f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92"} Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.173201 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"64ee25b4-c05d-4fa2-97fd-72687a003c57","Type":"ContainerDied","Data":"3779e4e1c65b0520d8171ef6f15760ebfb2a07011fc775fbc362856373d962bc"} Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.173226 4900 scope.go:117] "RemoveContainer" containerID="f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92" Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.173382 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.175394 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"12e426fe-b93c-46cf-979b-69fc1eff7684","Type":"ContainerStarted","Data":"555deb0ca5e0c679e8ce5ae8d3107b3f5fb1bb70ac38d3c5e3352639eba0e334"} Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.175504 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"12e426fe-b93c-46cf-979b-69fc1eff7684","Type":"ContainerStarted","Data":"a8ca001a2a6d85b5ed54e09e3daa091c6d150a2e1cb2ca40bd9d114eaf8d2a79"} Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.203204 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.230407 4900 scope.go:117] "RemoveContainer" containerID="f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92" Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.231407 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 15:17:43 crc kubenswrapper[4900]: E1202 15:17:43.231736 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92\": container with ID starting with f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92 not found: ID does not exist" containerID="f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92" Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.231879 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92"} err="failed to get container status \"f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92\": rpc error: code = NotFound desc = could not find container \"f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92\": container with ID starting with f7b393fb572a0111ecdf44e0366250ab79f0ef9604779d1eed212c7967376b92 not found: ID does not exist" Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.244766 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 15:17:43 crc kubenswrapper[4900]: E1202 15:17:43.246224 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ee25b4-c05d-4fa2-97fd-72687a003c57" containerName="nova-cell1-conductor-conductor" Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.246263 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ee25b4-c05d-4fa2-97fd-72687a003c57" containerName="nova-cell1-conductor-conductor" Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.247050 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ee25b4-c05d-4fa2-97fd-72687a003c57" containerName="nova-cell1-conductor-conductor" Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.247861 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.247787623 podStartE2EDuration="2.247787623s" podCreationTimestamp="2025-12-02 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:43.228303447 +0000 UTC m=+5708.644117298" watchObservedRunningTime="2025-12-02 
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.247861 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.247787623 podStartE2EDuration="2.247787623s" podCreationTimestamp="2025-12-02 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:43.228303447 +0000 UTC m=+5708.644117298" watchObservedRunningTime="2025-12-02 15:17:43.247787623 +0000 UTC m=+5708.663601484"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.248556 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.251810 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.295043 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.337603 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.337706 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.337854 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p42zb\" (UniqueName: \"kubernetes.io/projected/67fefbb5-0c22-4a87-bd43-80325328c3e2-kube-api-access-p42zb\") pod \"nova-cell1-conductor-0\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.402695 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.71:8774/\": read tcp 10.217.0.2:51348->10.217.1.71:8774: read: connection reset by peer"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.402748 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.71:8774/\": read tcp 10.217.0.2:51352->10.217.1.71:8774: read: connection reset by peer"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.439939 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.440062 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p42zb\" (UniqueName: \"kubernetes.io/projected/67fefbb5-0c22-4a87-bd43-80325328c3e2-kube-api-access-p42zb\") pod \"nova-cell1-conductor-0\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.440106 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.446535 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.446598 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.466384 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p42zb\" (UniqueName: \"kubernetes.io/projected/67fefbb5-0c22-4a87-bd43-80325328c3e2-kube-api-access-p42zb\") pod \"nova-cell1-conductor-0\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.586496 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.637378 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.72:8775/\": read tcp 10.217.0.2:39698->10.217.1.72:8775: read: connection reset by peer"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.637963 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.72:8775/\": read tcp 10.217.0.2:39686->10.217.1.72:8775: read: connection reset by peer"
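The reconciler_common lines above show the kubelet's volume manager converging actual state toward desired state for the replacement nova-cell1-conductor-0 pod: VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded" per volume. A toy sketch of that desired-vs-actual diff loop; the types and function names here are illustrative, not kubelet APIs:

```go
package main

import "fmt"

// volumeState is an illustrative stand-in for the volume manager's
// actual-state-of-world bookkeeping: unique volume name -> mounted.
type volumeState map[string]bool

// reconcile mounts every desired volume that is not yet in the actual
// state, mirroring the "MountVolume started" / "SetUp succeeded" pairs.
func reconcile(desired []string, actual volumeState, mount func(string) error) {
	for _, name := range desired {
		if actual[name] {
			continue // already mounted; nothing to do this pass
		}
		fmt.Printf("MountVolume started for volume %q\n", name)
		if err := mount(name); err != nil {
			fmt.Printf("MountVolume failed for %q: %v\n", name, err)
			continue // left unmounted; retried on the next reconciler pass
		}
		actual[name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
	}
}

func main() {
	desired := []string{"config-data", "combined-ca-bundle", "kube-api-access-p42zb"}
	reconcile(desired, volumeState{}, func(string) error { return nil })
}
```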
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.796681 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.849635 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-logs\") pod \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") "
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.849760 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-config-data\") pod \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") "
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.849876 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-combined-ca-bundle\") pod \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") "
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.849987 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46nj7\" (UniqueName: \"kubernetes.io/projected/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-kube-api-access-46nj7\") pod \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\" (UID: \"24a2c9ef-380a-4ad0-8756-95fe3df13d3d\") "
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.851330 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-logs" (OuterVolumeSpecName: "logs") pod "24a2c9ef-380a-4ad0-8756-95fe3df13d3d" (UID: "24a2c9ef-380a-4ad0-8756-95fe3df13d3d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.880453 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-kube-api-access-46nj7" (OuterVolumeSpecName: "kube-api-access-46nj7") pod "24a2c9ef-380a-4ad0-8756-95fe3df13d3d" (UID: "24a2c9ef-380a-4ad0-8756-95fe3df13d3d"). InnerVolumeSpecName "kube-api-access-46nj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.900748 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-config-data" (OuterVolumeSpecName: "config-data") pod "24a2c9ef-380a-4ad0-8756-95fe3df13d3d" (UID: "24a2c9ef-380a-4ad0-8756-95fe3df13d3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.914831 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24a2c9ef-380a-4ad0-8756-95fe3df13d3d" (UID: "24a2c9ef-380a-4ad0-8756-95fe3df13d3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.952417 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-logs\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.952445 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.952455 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:43 crc kubenswrapper[4900]: I1202 15:17:43.952464 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46nj7\" (UniqueName: \"kubernetes.io/projected/24a2c9ef-380a-4ad0-8756-95fe3df13d3d-kube-api-access-46nj7\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.150288 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.199360 4900 generic.go:334] "Generic (PLEG): container finished" podID="9548e976-6686-45d3-9b04-74b567fc4b5d" containerID="1f6c3681e4f06007f3b716ee5692554a89699f0a7937879ffce2af1d3b01fcd7" exitCode=0
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.199425 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9548e976-6686-45d3-9b04-74b567fc4b5d","Type":"ContainerDied","Data":"1f6c3681e4f06007f3b716ee5692554a89699f0a7937879ffce2af1d3b01fcd7"}
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.200984 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"67fefbb5-0c22-4a87-bd43-80325328c3e2","Type":"ContainerStarted","Data":"5a975c05af751fb221e510fe57e3f2dd4a220bed97ab40cb583a9778dc7bee7b"}
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.209133 4900 generic.go:334] "Generic (PLEG): container finished" podID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerID="b02d4227a7252caf3a8275e2ce467d29880ce1596272424efd5faf72769e2538" exitCode=0
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.209199 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"94e1f486-c3e5-420a-b8af-de18cb2b73b2","Type":"ContainerDied","Data":"b02d4227a7252caf3a8275e2ce467d29880ce1596272424efd5faf72769e2538"}
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.223044 4900 generic.go:334] "Generic (PLEG): container finished" podID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerID="425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98" exitCode=0
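The "Generic (PLEG)" lines come from the pod lifecycle event generator: it periodically relists containers from the runtime and turns state transitions into ContainerStarted/ContainerDied events for the sync loop. A toy reduction of that diff, under illustrative names and states (not the kubelet's implementation):

```go
package main

import "fmt"

type containerState string

const (
	running containerState = "running"
	exited  containerState = "exited"
)

// plegEvents compares the previous and current relist snapshots
// (container ID -> state) and emits the lifecycle events a PLEG-style
// generator would hand to the kubelet sync loop.
func plegEvents(old, cur map[string]containerState) []string {
	var events []string
	for id, state := range cur {
		prev, seen := old[id]
		switch {
		case !seen && state == running:
			events = append(events, "ContainerStarted "+id)
		case seen && prev == running && state == exited:
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	old := map[string]containerState{"1f6c3681": running}
	cur := map[string]containerState{"1f6c3681": exited, "5a975c05": running}
	for _, e := range plegEvents(old, cur) {
		fmt.Println(e)
	}
}
```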
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.223149 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.223235 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24a2c9ef-380a-4ad0-8756-95fe3df13d3d","Type":"ContainerDied","Data":"425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98"}
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.223271 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24a2c9ef-380a-4ad0-8756-95fe3df13d3d","Type":"ContainerDied","Data":"52c923a6d7824423e6043d7bc2310d3ca5761a8c29db8b4cc58cd853b29067e1"}
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.223295 4900 scope.go:117] "RemoveContainer" containerID="425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.315038 4900 scope.go:117] "RemoveContainer" containerID="1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.319332 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.361312 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.361913 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpnwj\" (UniqueName: \"kubernetes.io/projected/94e1f486-c3e5-420a-b8af-de18cb2b73b2-kube-api-access-hpnwj\") pod \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") "
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.361953 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94e1f486-c3e5-420a-b8af-de18cb2b73b2-logs\") pod \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") "
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.362010 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-combined-ca-bundle\") pod \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") "
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.362219 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-config-data\") pod \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\" (UID: \"94e1f486-c3e5-420a-b8af-de18cb2b73b2\") "
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.364751 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94e1f486-c3e5-420a-b8af-de18cb2b73b2-logs" (OuterVolumeSpecName: "logs") pod "94e1f486-c3e5-420a-b8af-de18cb2b73b2" (UID: "94e1f486-c3e5-420a-b8af-de18cb2b73b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.399749 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e1f486-c3e5-420a-b8af-de18cb2b73b2-kube-api-access-hpnwj" (OuterVolumeSpecName: "kube-api-access-hpnwj") pod "94e1f486-c3e5-420a-b8af-de18cb2b73b2" (UID: "94e1f486-c3e5-420a-b8af-de18cb2b73b2"). InnerVolumeSpecName "kube-api-access-hpnwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.408710 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.420289 4900 scope.go:117] "RemoveContainer" containerID="425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98"
Dec 02 15:17:44 crc kubenswrapper[4900]: E1202 15:17:44.423816 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98\": container with ID starting with 425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98 not found: ID does not exist" containerID="425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.423870 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98"} err="failed to get container status \"425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98\": rpc error: code = NotFound desc = could not find container \"425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98\": container with ID starting with 425490e47d5384c03d17fa6ff7ab300aecd0e7bb3743bfd5af7364a6bf276e98 not found: ID does not exist"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.423900 4900 scope.go:117] "RemoveContainer" containerID="1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58"
Dec 02 15:17:44 crc kubenswrapper[4900]: E1202 15:17:44.425100 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58\": container with ID starting with 1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58 not found: ID does not exist" containerID="1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.425227 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58"} err="failed to get container status \"1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58\": rpc error: code = NotFound desc = could not find container \"1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58\": container with ID starting with 1843d2206b43be69862a6ddbed07052eff2ae895834579fefabb5e0038b51b58 not found: ID does not exist"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.434636 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 02 15:17:44 crc kubenswrapper[4900]: E1202 15:17:44.435150 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-api"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.435212 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-api"
Dec 02 15:17:44 crc kubenswrapper[4900]: E1202 15:17:44.435282 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-metadata"
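The RemoveStaleState / "Deleted CPUSet assignment" lines show the CPU and memory managers dropping per-container resource assignments whose pod UIDs the kubelet no longer tracks, so a recreated pod starts from a clean slate. A sketch of that sweep over a keyed assignment map; the types are illustrative:

```go
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops resource assignments belonging to pods that are
// no longer active, echoing the cpu_manager/memory_manager log lines.
// Illustrative only; the real managers persist state via checkpoints.
func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	a := map[key]string{
		{"24a2c9ef", "nova-api-api"}: "cpuset 0-1", // stale: pod was deleted
		{"1b0f74c1", "nova-api-api"}: "cpuset 2-3", // replacement pod, still active
	}
	removeStaleState(a, map[string]bool{"1b0f74c1": true})
	fmt.Println(len(a), "assignment(s) left")
}
```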
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.435348 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-metadata"
Dec 02 15:17:44 crc kubenswrapper[4900]: E1202 15:17:44.435414 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-log"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.435461 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-log"
Dec 02 15:17:44 crc kubenswrapper[4900]: E1202 15:17:44.435510 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-log"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.437125 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-log"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.437404 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-log"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.437484 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" containerName="nova-metadata-metadata"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.437549 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-log"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.437602 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" containerName="nova-api-api"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.439087 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.442208 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.461117 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-config-data" (OuterVolumeSpecName: "config-data") pod "94e1f486-c3e5-420a-b8af-de18cb2b73b2" (UID: "94e1f486-c3e5-420a-b8af-de18cb2b73b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.464995 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.465031 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpnwj\" (UniqueName: \"kubernetes.io/projected/94e1f486-c3e5-420a-b8af-de18cb2b73b2-kube-api-access-hpnwj\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.465041 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94e1f486-c3e5-420a-b8af-de18cb2b73b2-logs\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.480815 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.497765 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94e1f486-c3e5-420a-b8af-de18cb2b73b2" (UID: "94e1f486-c3e5-420a-b8af-de18cb2b73b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.502795 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.566541 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l57fc\" (UniqueName: \"kubernetes.io/projected/9548e976-6686-45d3-9b04-74b567fc4b5d-kube-api-access-l57fc\") pod \"9548e976-6686-45d3-9b04-74b567fc4b5d\" (UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") "
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.566631 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-config-data\") pod \"9548e976-6686-45d3-9b04-74b567fc4b5d\" (UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") "
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.566685 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-combined-ca-bundle\") pod \"9548e976-6686-45d3-9b04-74b567fc4b5d\" (UID: \"9548e976-6686-45d3-9b04-74b567fc4b5d\") "
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.567012 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.567056 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-config-data\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.567075 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqvjz\" (UniqueName: \"kubernetes.io/projected/1b0f74c1-cc44-44ac-a262-eea482b36ca8-kube-api-access-dqvjz\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.567109 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b0f74c1-cc44-44ac-a262-eea482b36ca8-logs\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.567184 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94e1f486-c3e5-420a-b8af-de18cb2b73b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.569507 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9548e976-6686-45d3-9b04-74b567fc4b5d-kube-api-access-l57fc" (OuterVolumeSpecName: "kube-api-access-l57fc") pod "9548e976-6686-45d3-9b04-74b567fc4b5d" (UID: "9548e976-6686-45d3-9b04-74b567fc4b5d"). InnerVolumeSpecName "kube-api-access-l57fc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.596984 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-config-data" (OuterVolumeSpecName: "config-data") pod "9548e976-6686-45d3-9b04-74b567fc4b5d" (UID: "9548e976-6686-45d3-9b04-74b567fc4b5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.600175 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9548e976-6686-45d3-9b04-74b567fc4b5d" (UID: "9548e976-6686-45d3-9b04-74b567fc4b5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.669377 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b0f74c1-cc44-44ac-a262-eea482b36ca8-logs\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.669568 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.669617 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-config-data\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.669666 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqvjz\" (UniqueName: \"kubernetes.io/projected/1b0f74c1-cc44-44ac-a262-eea482b36ca8-kube-api-access-dqvjz\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.669752 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l57fc\" (UniqueName: \"kubernetes.io/projected/9548e976-6686-45d3-9b04-74b567fc4b5d-kube-api-access-l57fc\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.669768 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.669779 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9548e976-6686-45d3-9b04-74b567fc4b5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.670530 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b0f74c1-cc44-44ac-a262-eea482b36ca8-logs\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.674478 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-config-data\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.675394 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: E1202 15:17:44.676391 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 02 15:17:44 crc kubenswrapper[4900]: E1202 15:17:44.677944 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 02 15:17:44 crc kubenswrapper[4900]: E1202 15:17:44.679350 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 02 15:17:44 crc kubenswrapper[4900]: E1202 15:17:44.679382 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="402900ed-44f4-42fe-b2ff-1fb701a09cf2" containerName="nova-scheduler-scheduler"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.690272 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqvjz\" (UniqueName: \"kubernetes.io/projected/1b0f74c1-cc44-44ac-a262-eea482b36ca8-kube-api-access-dqvjz\") pod \"nova-api-0\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.774303 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.933618 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a2c9ef-380a-4ad0-8756-95fe3df13d3d" path="/var/lib/kubelet/pods/24a2c9ef-380a-4ad0-8756-95fe3df13d3d/volumes"
Dec 02 15:17:44 crc kubenswrapper[4900]: I1202 15:17:44.934865 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ee25b4-c05d-4fa2-97fd-72687a003c57" path="/var/lib/kubelet/pods/64ee25b4-c05d-4fa2-97fd-72687a003c57/volumes"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.116394 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.116468 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.116516 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq"
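The ExecSync errors above are an exec readiness probe (pgrep -r DRST nova-scheduler) that cannot run because the target container is already stopping; the prober therefore reports "Probe errored" rather than a plain failure. A sketch of the three-valued exec-probe outcome; the kubelet drives this through the CRI rather than os/exec, so this is illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

type probeResult string

const (
	success    probeResult = "success"
	failure    probeResult = "failure"
	probeError probeResult = "error"
)

// execProbe runs a command the way an exec probe is interpreted: exit code
// 0 is success, a non-zero exit is failure (not ready), and anything that
// prevents the command from running at all (e.g. "cannot register an exec
// PID: container is stopping") is a probe error.
func execProbe(name string, args ...string) (probeResult, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return success, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return failure, nil // command ran and reported not-ready
	}
	return probeError, err // command could not be executed
}

func main() {
	res, err := execProbe("/usr/bin/pgrep", "-r", "DRST", "nova-scheduler")
	fmt.Println(res, err)
}
```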
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.117246 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.117307 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" gracePeriod=600
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.245267 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.245258 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"94e1f486-c3e5-420a-b8af-de18cb2b73b2","Type":"ContainerDied","Data":"114b880da7695babf7bef4089309d793549c3f6ebad12af50e74cd49d69bcc86"}
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.245446 4900 scope.go:117] "RemoveContainer" containerID="b02d4227a7252caf3a8275e2ce467d29880ce1596272424efd5faf72769e2538"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.257313 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9548e976-6686-45d3-9b04-74b567fc4b5d","Type":"ContainerDied","Data":"a93b9be05c7ecadc2ed8b755a761c0590001dfc7ace84bcea6ff28ad693d0467"}
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.257327 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: E1202 15:17:45.259688 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.270104 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"67fefbb5-0c22-4a87-bd43-80325328c3e2","Type":"ContainerStarted","Data":"56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b"}
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.271536 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.295212 4900 scope.go:117] "RemoveContainer" containerID="d65ba649dd817633b3a7fa3059bf6940eb40dce9e7b318ec4d2cef1b9b4f9707"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.297924 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.297875837 podStartE2EDuration="2.297875837s" podCreationTimestamp="2025-12-02 15:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:45.288059812 +0000 UTC m=+5710.703873663" watchObservedRunningTime="2025-12-02 15:17:45.297875837 +0000 UTC m=+5710.713689708"
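The "back-off 5m0s restarting failed container" message is the kubelet's CrashLoopBackOff: a crashing container is restarted with an exponentially growing delay, and 5m0s in the message is the cap. A sketch of that doubling schedule; the 10s initial delay and 5m cap are the kubelet's commonly documented defaults, assumed here rather than taken from this log:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns a kubelet-style restart backoff for the nth
// consecutive crash: an initial delay doubled each restart, capped at max.
func crashLoopDelay(restarts int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("restart %d -> wait %v\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
}
```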
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.325092 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.341951 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.347258 4900 scope.go:117] "RemoveContainer" containerID="1f6c3681e4f06007f3b716ee5692554a89699f0a7937879ffce2af1d3b01fcd7"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.371127 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.386964 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.406001 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 15:17:45 crc kubenswrapper[4900]: E1202 15:17:45.406425 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9548e976-6686-45d3-9b04-74b567fc4b5d" containerName="nova-cell0-conductor-conductor"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.406441 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9548e976-6686-45d3-9b04-74b567fc4b5d" containerName="nova-cell0-conductor-conductor"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.406631 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="9548e976-6686-45d3-9b04-74b567fc4b5d" containerName="nova-cell0-conductor-conductor"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.407557 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.411144 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.433784 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.437363 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.444398 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.444616 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.463513 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.487899 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.488735 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95rc\" (UniqueName: \"kubernetes.io/projected/b504d673-a6a9-401b-bc03-47a24ac82901-kube-api-access-j95rc\") pod \"nova-cell0-conductor-0\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.488803 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-logs\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.488833 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.488863 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.488901 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-config-data\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.489061 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.489290 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rs84\" (UniqueName: \"kubernetes.io/projected/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-kube-api-access-8rs84\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.602900 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95rc\" (UniqueName: \"kubernetes.io/projected/b504d673-a6a9-401b-bc03-47a24ac82901-kube-api-access-j95rc\") pod \"nova-cell0-conductor-0\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.603169 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-logs\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.603272 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.603384 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.603491 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-config-data\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.603637 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.603824 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rs84\" (UniqueName: \"kubernetes.io/projected/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-kube-api-access-8rs84\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.604822 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-logs\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.613518 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-config-data\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.617592 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.619115 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rs84\" (UniqueName: \"kubernetes.io/projected/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-kube-api-access-8rs84\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.628847 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.639276 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.643777 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95rc\" (UniqueName: \"kubernetes.io/projected/b504d673-a6a9-401b-bc03-47a24ac82901-kube-api-access-j95rc\") pod \"nova-cell0-conductor-0\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.739755 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 15:17:45 crc kubenswrapper[4900]: I1202 15:17:45.757890 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.237528 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.286610 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b0f74c1-cc44-44ac-a262-eea482b36ca8","Type":"ContainerStarted","Data":"23f26d6d7f0fab6b86022862139e84f21ca42cf481e2b4fc0c241b472792b2a4"}
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.286681 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b0f74c1-cc44-44ac-a262-eea482b36ca8","Type":"ContainerStarted","Data":"98716c9c32801896c448a0c761da06478f08646b7cb6d5f2eff8e4deabceb5b7"}
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.286695 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b0f74c1-cc44-44ac-a262-eea482b36ca8","Type":"ContainerStarted","Data":"1e1154e75af84e18117c51bbd478fc98fc6d2086b764f97ffee4718aa9b58456"}
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.291144 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" exitCode=0
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.291201 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"}
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.291229 4900 scope.go:117] "RemoveContainer" containerID="3b76a522fc29ab4b883e8d52d8ae8d1cc61b9e17f09e1711cca595a73a978fea"
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.291566 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:17:46 crc kubenswrapper[4900]: E1202 15:17:46.291788 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.295149 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff591ec8-ae35-4a05-b7e1-99b63b7125d7","Type":"ContainerStarted","Data":"7a3e5e11c81ccc5e48911186af2b2bf1e334db2f5daef7d1d2cb085167a11188"}
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.313510 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3134890390000002 podStartE2EDuration="2.313489039s" podCreationTimestamp="2025-12-02 15:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:46.302508171 +0000 UTC m=+5711.718322022" watchObservedRunningTime="2025-12-02 15:17:46.313489039 +0000 UTC m=+5711.729302900"
Dec 02 15:17:46 crc kubenswrapper[4900]: W1202 15:17:46.356011 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb504d673_a6a9_401b_bc03_47a24ac82901.slice/crio-bc1cc3a72f0d4b5503b92137f328bebdb55f597a1212fe47b99b39e452c5a8fb WatchSource:0}: Error finding container bc1cc3a72f0d4b5503b92137f328bebdb55f597a1212fe47b99b39e452c5a8fb: Status 404 returned error can't find the container with id bc1cc3a72f0d4b5503b92137f328bebdb55f597a1212fe47b99b39e452c5a8fb
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.363376 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.831358 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.921831 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e1f486-c3e5-420a-b8af-de18cb2b73b2" path="/var/lib/kubelet/pods/94e1f486-c3e5-420a-b8af-de18cb2b73b2/volumes"
Dec 02 15:17:46 crc kubenswrapper[4900]: I1202 15:17:46.922472 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9548e976-6686-45d3-9b04-74b567fc4b5d" path="/var/lib/kubelet/pods/9548e976-6686-45d3-9b04-74b567fc4b5d/volumes"
Dec 02 15:17:47 crc kubenswrapper[4900]: I1202 15:17:47.305039 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b504d673-a6a9-401b-bc03-47a24ac82901","Type":"ContainerStarted","Data":"5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968"}
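The kubelet_volumes "Cleaned up orphaned pod volumes dir" lines show garbage collection of /var/lib/kubelet/pods/<podUID>/volumes for pods the node no longer runs, once their volumes are unmounted. A sketch of that sweep; the activePods set and the use of a scratch directory are illustrative, and the real kubelet performs additional safety checks (e.g. that nothing is still mounted) before removing anything:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs removes per-pod volume directories whose pod UID
// is no longer tracked, echoing the "Cleaned up orphaned pod volumes dir"
// log lines.
func cleanupOrphanedPodDirs(podsRoot string, active map[string]bool) error {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue // still a known pod; leave its directory alone
		}
		dir := filepath.Join(podsRoot, e.Name(), "volumes")
		if err := os.RemoveAll(dir); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), dir)
	}
	return nil
}

func main() {
	// Hypothetical usage against a temp directory, not /var/lib/kubelet.
	root, _ := os.MkdirTemp("", "pods")
	defer os.RemoveAll(root)
	os.MkdirAll(filepath.Join(root, "24a2c9ef", "volumes"), 0o755)
	_ = cleanupOrphanedPodDirs(root, map[string]bool{})
}
```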
Dec 02 15:17:47 crc kubenswrapper[4900]: I1202 15:17:47.305086 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b504d673-a6a9-401b-bc03-47a24ac82901","Type":"ContainerStarted","Data":"bc1cc3a72f0d4b5503b92137f328bebdb55f597a1212fe47b99b39e452c5a8fb"}
Dec 02 15:17:47 crc kubenswrapper[4900]: I1202 15:17:47.305462 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Dec 02 15:17:47 crc kubenswrapper[4900]: I1202 15:17:47.309631 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff591ec8-ae35-4a05-b7e1-99b63b7125d7","Type":"ContainerStarted","Data":"647b9ca80ba02788c97c620fa0f31986be3035d35e9afccc42a7975903cce145"}
Dec 02 15:17:47 crc kubenswrapper[4900]: I1202 15:17:47.309718 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff591ec8-ae35-4a05-b7e1-99b63b7125d7","Type":"ContainerStarted","Data":"e82da1ed9487f2b43c135a4c9c5aacd7a7e6578d47a8c91ecaac3cada80d60e3"}
Dec 02 15:17:47 crc kubenswrapper[4900]: I1202 15:17:47.350022 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.349999498 podStartE2EDuration="2.349999498s" podCreationTimestamp="2025-12-02 15:17:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:47.341589212 +0000 UTC m=+5712.757403073" watchObservedRunningTime="2025-12-02 15:17:47.349999498 +0000 UTC m=+5712.765813359"
Dec 02 15:17:47 crc kubenswrapper[4900]: I1202 15:17:47.352397 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.352385545 podStartE2EDuration="2.352385545s" podCreationTimestamp="2025-12-02 15:17:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:47.321519078 +0000 UTC m=+5712.737332939" watchObservedRunningTime="2025-12-02 15:17:47.352385545 +0000 UTC m=+5712.768199396"
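In the pod_startup_latency_tracker lines, podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and the "0001-01-01 00:00:00" pull timestamps are Go's zero time.Time, meaning no image pull was observed; in that case the SLO-facing duration equals the end-to-end duration, exactly as both fields match in these entries. A sketch of the arithmetic (the zero-time convention is visible in the log; the treatment of non-zero pull times is simplified away here):

```go
package main

import (
	"fmt"
	"time"
)

// startupDuration reproduces the podStartE2EDuration arithmetic: time from
// pod creation to the first observation of the pod running. When image
// pulls occur, the tracker additionally discounts pull time for the SLO
// variant; with zero-valued pull timestamps (no pull), SLO == E2E.
func startupDuration(created, observedRunning time.Time) time.Duration {
	return observedRunning.Sub(created)
}

func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2025-12-02T15:17:45Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-12-02T15:17:47.349999498Z")
	noPull := time.Time{}
	fmt.Println(noPull.IsZero())                   // true: renders as 0001-01-01 00:00:00
	fmt.Println(startupDuration(created, running)) // 2.349999498s, matching the log
}
```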
Dec 02 15:17:48 crc kubenswrapper[4900]: I1202 15:17:48.987848 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.070173 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-config-data\") pod \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") "
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.070461 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-combined-ca-bundle\") pod \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") "
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.070517 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx869\" (UniqueName: \"kubernetes.io/projected/402900ed-44f4-42fe-b2ff-1fb701a09cf2-kube-api-access-nx869\") pod \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\" (UID: \"402900ed-44f4-42fe-b2ff-1fb701a09cf2\") "
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.083424 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402900ed-44f4-42fe-b2ff-1fb701a09cf2-kube-api-access-nx869" (OuterVolumeSpecName: "kube-api-access-nx869") pod "402900ed-44f4-42fe-b2ff-1fb701a09cf2" (UID: "402900ed-44f4-42fe-b2ff-1fb701a09cf2"). InnerVolumeSpecName "kube-api-access-nx869". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.095725 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "402900ed-44f4-42fe-b2ff-1fb701a09cf2" (UID: "402900ed-44f4-42fe-b2ff-1fb701a09cf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.105019 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-config-data" (OuterVolumeSpecName: "config-data") pod "402900ed-44f4-42fe-b2ff-1fb701a09cf2" (UID: "402900ed-44f4-42fe-b2ff-1fb701a09cf2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.173082 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.173116 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402900ed-44f4-42fe-b2ff-1fb701a09cf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.173130 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx869\" (UniqueName: \"kubernetes.io/projected/402900ed-44f4-42fe-b2ff-1fb701a09cf2-kube-api-access-nx869\") on node \"crc\" DevicePath \"\""
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.332259 4900 generic.go:334] "Generic (PLEG): container finished" podID="402900ed-44f4-42fe-b2ff-1fb701a09cf2" containerID="da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047" exitCode=0
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.332308 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"402900ed-44f4-42fe-b2ff-1fb701a09cf2","Type":"ContainerDied","Data":"da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047"}
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.332329 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.332340 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"402900ed-44f4-42fe-b2ff-1fb701a09cf2","Type":"ContainerDied","Data":"99a94f21f9b45194ce04a23e8f556b55d8f4b5debcbdbb82374833b671a5b183"}
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.332362 4900 scope.go:117] "RemoveContainer" containerID="da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.364811 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.366003 4900 scope.go:117] "RemoveContainer" containerID="da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047"
Dec 02 15:17:49 crc kubenswrapper[4900]: E1202 15:17:49.366734 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047\": container with ID starting with da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047 not found: ID does not exist" containerID="da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.366848 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047"} err="failed to get container status \"da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047\": rpc error: code = NotFound desc = could not find container \"da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047\": container with ID starting with da2e68339bec64a9bb6ab543640e0d55ab1db7395ffeda72b6e7cb6b0ef89047 not found: ID does not exist"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.375428 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.389386 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 15:17:49 crc kubenswrapper[4900]: E1202 15:17:49.389955 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402900ed-44f4-42fe-b2ff-1fb701a09cf2" containerName="nova-scheduler-scheduler"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.389978 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="402900ed-44f4-42fe-b2ff-1fb701a09cf2" containerName="nova-scheduler-scheduler"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.390421 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="402900ed-44f4-42fe-b2ff-1fb701a09cf2" containerName="nova-scheduler-scheduler"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.391953 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.395133 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.399907 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.479592 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzkhw\" (UniqueName: \"kubernetes.io/projected/d7c11796-f4ef-4637-8541-5b27d488f6ab-kube-api-access-vzkhw\") pod \"nova-scheduler-0\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.479734 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.479863 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-config-data\") pod \"nova-scheduler-0\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.582274 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzkhw\" (UniqueName: \"kubernetes.io/projected/d7c11796-f4ef-4637-8541-5b27d488f6ab-kube-api-access-vzkhw\") pod \"nova-scheduler-0\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.582386 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.582454 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-config-data\") pod \"nova-scheduler-0\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.587480 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.587661 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-config-data\") pod \"nova-scheduler-0\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.617774 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzkhw\" (UniqueName: \"kubernetes.io/projected/d7c11796-f4ef-4637-8541-5b27d488f6ab-kube-api-access-vzkhw\") pod \"nova-scheduler-0\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " pod="openstack/nova-scheduler-0"
Dec 02 15:17:49 crc kubenswrapper[4900]: I1202 15:17:49.721321 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 02 15:17:50 crc kubenswrapper[4900]: W1202 15:17:50.241865 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7c11796_f4ef_4637_8541_5b27d488f6ab.slice/crio-8bc0c99c0d2dd77602fcbc46d2f33a50f97d5c84f739fea5c8996cd919fa601d WatchSource:0}: Error finding container 8bc0c99c0d2dd77602fcbc46d2f33a50f97d5c84f739fea5c8996cd919fa601d: Status 404 returned error can't find the container with id 8bc0c99c0d2dd77602fcbc46d2f33a50f97d5c84f739fea5c8996cd919fa601d
Dec 02 15:17:50 crc kubenswrapper[4900]: I1202 15:17:50.245322 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 15:17:50 crc kubenswrapper[4900]: I1202 15:17:50.341959 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d7c11796-f4ef-4637-8541-5b27d488f6ab","Type":"ContainerStarted","Data":"8bc0c99c0d2dd77602fcbc46d2f33a50f97d5c84f739fea5c8996cd919fa601d"}
Dec 02 15:17:50 crc kubenswrapper[4900]: I1202 15:17:50.740436 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 02 15:17:50 crc kubenswrapper[4900]: I1202 15:17:50.740487 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 02 15:17:50 crc kubenswrapper[4900]: I1202 15:17:50.921955 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402900ed-44f4-42fe-b2ff-1fb701a09cf2" path="/var/lib/kubelet/pods/402900ed-44f4-42fe-b2ff-1fb701a09cf2/volumes"
Dec 02 15:17:51 crc kubenswrapper[4900]: I1202 15:17:51.368661 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d7c11796-f4ef-4637-8541-5b27d488f6ab","Type":"ContainerStarted","Data":"5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8"}
Dec 02 15:17:51 crc kubenswrapper[4900]: I1202 15:17:51.396937 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.39691074 podStartE2EDuration="2.39691074s" podCreationTimestamp="2025-12-02 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:17:51.392931358 +0000 UTC m=+5716.808745259" watchObservedRunningTime="2025-12-02 15:17:51.39691074 +0000 UTC m=+5716.812724601" Dec 02 15:17:51 crc kubenswrapper[4900]: I1202 15:17:51.830789 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:51 crc kubenswrapper[4900]: I1202 15:17:51.847788 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:52 crc kubenswrapper[4900]: I1202 15:17:52.394812 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 02 15:17:53 crc kubenswrapper[4900]: I1202 15:17:53.622338 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 02 15:17:54 crc kubenswrapper[4900]: I1202 15:17:54.721684 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 15:17:54 crc kubenswrapper[4900]: I1202 15:17:54.775051 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 15:17:54 crc kubenswrapper[4900]: I1202 15:17:54.775574 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 15:17:55 crc kubenswrapper[4900]: I1202 15:17:55.740442 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 15:17:55 crc kubenswrapper[4900]: I1202 15:17:55.740548 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 15:17:55 crc kubenswrapper[4900]: I1202 15:17:55.788793 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 02 15:17:55 crc kubenswrapper[4900]: I1202 15:17:55.857916 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.81:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 15:17:55 crc kubenswrapper[4900]: I1202 15:17:55.857923 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.81:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 15:17:56 crc kubenswrapper[4900]: I1202 15:17:56.823015 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.82:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 15:17:56 crc kubenswrapper[4900]: I1202 15:17:56.823035 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.82:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 15:17:57 crc kubenswrapper[4900]: I1202 15:17:57.910614 4900 scope.go:117] "RemoveContainer" 
containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:17:57 crc kubenswrapper[4900]: E1202 15:17:57.911328 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.244343 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.246356 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.247967 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.263938 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.371951 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.372069 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.372103 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-scripts\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.372137 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z88hm\" (UniqueName: \"kubernetes.io/projected/f437502b-ceb8-45f9-90a9-e69a5d156d51-kube-api-access-z88hm\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.372197 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f437502b-ceb8-45f9-90a9-e69a5d156d51-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.372380 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 
02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.474068 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.474135 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-scripts\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.474391 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z88hm\" (UniqueName: \"kubernetes.io/projected/f437502b-ceb8-45f9-90a9-e69a5d156d51-kube-api-access-z88hm\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.474471 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f437502b-ceb8-45f9-90a9-e69a5d156d51-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.474640 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.474887 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f437502b-ceb8-45f9-90a9-e69a5d156d51-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.474915 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.481201 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.482144 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.482172 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.482702 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-scripts\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.504858 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z88hm\" (UniqueName: \"kubernetes.io/projected/f437502b-ceb8-45f9-90a9-e69a5d156d51-kube-api-access-z88hm\") pod \"cinder-scheduler-0\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " pod="openstack/cinder-scheduler-0" Dec 02 15:17:58 crc kubenswrapper[4900]: I1202 15:17:58.565053 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 15:17:59 crc kubenswrapper[4900]: W1202 15:17:59.083893 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf437502b_ceb8_45f9_90a9_e69a5d156d51.slice/crio-918b3cf0423536b7dcff20a4e8c4a0e70fa003120bafdf5a0224d8755895ffd4 WatchSource:0}: Error finding container 918b3cf0423536b7dcff20a4e8c4a0e70fa003120bafdf5a0224d8755895ffd4: Status 404 returned error can't find the container with id 918b3cf0423536b7dcff20a4e8c4a0e70fa003120bafdf5a0224d8755895ffd4 Dec 02 15:17:59 crc kubenswrapper[4900]: I1202 15:17:59.085202 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 15:17:59 crc kubenswrapper[4900]: I1202 15:17:59.482590 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f437502b-ceb8-45f9-90a9-e69a5d156d51","Type":"ContainerStarted","Data":"918b3cf0423536b7dcff20a4e8c4a0e70fa003120bafdf5a0224d8755895ffd4"} Dec 02 15:17:59 crc kubenswrapper[4900]: I1202 15:17:59.722279 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 15:17:59 crc kubenswrapper[4900]: I1202 15:17:59.783629 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 15:17:59 crc kubenswrapper[4900]: I1202 15:17:59.802222 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 02 15:17:59 crc kubenswrapper[4900]: I1202 15:17:59.802458 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f503388f-4299-472a-9d90-32f62770071f" containerName="cinder-api-log" containerID="cri-o://9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985" gracePeriod=30 Dec 02 15:17:59 crc kubenswrapper[4900]: I1202 15:17:59.802634 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f503388f-4299-472a-9d90-32f62770071f" containerName="cinder-api" containerID="cri-o://55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed" gracePeriod=30 Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.385490 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.387142 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.389627 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.420737 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.492093 4900 generic.go:334] "Generic (PLEG): container finished" podID="f503388f-4299-472a-9d90-32f62770071f" containerID="9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985" exitCode=143 Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.492190 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503388f-4299-472a-9d90-32f62770071f","Type":"ContainerDied","Data":"9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985"} Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.494592 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f437502b-ceb8-45f9-90a9-e69a5d156d51","Type":"ContainerStarted","Data":"609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d"} Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.494636 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f437502b-ceb8-45f9-90a9-e69a5d156d51","Type":"ContainerStarted","Data":"3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68"} Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.516338 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.516312926 podStartE2EDuration="2.516312926s" podCreationTimestamp="2025-12-02 15:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:18:00.512571011 +0000 UTC m=+5725.928384882" watchObservedRunningTime="2025-12-02 15:18:00.516312926 +0000 UTC m=+5725.932126777" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.516932 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjmkn\" (UniqueName: \"kubernetes.io/projected/b8adcc13-3199-4c22-b50e-cb975a62c107-kube-api-access-zjmkn\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.516986 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b8adcc13-3199-4c22-b50e-cb975a62c107-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517058 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517102 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517146 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517169 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517198 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517230 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517294 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517325 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517353 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-run\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517378 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517400 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517424 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517457 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.517493 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.526409 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619082 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619126 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619147 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-run\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619176 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619191 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619209 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619242 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619284 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619314 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjmkn\" (UniqueName: \"kubernetes.io/projected/b8adcc13-3199-4c22-b50e-cb975a62c107-kube-api-access-zjmkn\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619328 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b8adcc13-3199-4c22-b50e-cb975a62c107-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619380 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619402 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619441 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619456 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619479 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " 
pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.619499 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.620182 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.620327 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.620271 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.620227 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.620341 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.620511 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.620625 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.620617 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.620677 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-lib-modules\") 
pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.621197 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b8adcc13-3199-4c22-b50e-cb975a62c107-run\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.625207 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.628164 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.641949 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.641957 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8adcc13-3199-4c22-b50e-cb975a62c107-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.642336 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b8adcc13-3199-4c22-b50e-cb975a62c107-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.646406 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjmkn\" (UniqueName: \"kubernetes.io/projected/b8adcc13-3199-4c22-b50e-cb975a62c107-kube-api-access-zjmkn\") pod \"cinder-volume-volume1-0\" (UID: \"b8adcc13-3199-4c22-b50e-cb975a62c107\") " pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:00 crc kubenswrapper[4900]: I1202 15:18:00.716052 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.084167 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.085957 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.088085 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.114515 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.235971 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236164 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236243 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236267 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236316 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-scripts\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236362 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b2e331a7-7edb-4984-a486-00ff5463ca20-ceph\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236389 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-dev\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236430 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-lib-modules\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236545 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-m695r\" (UniqueName: \"kubernetes.io/projected/b2e331a7-7edb-4984-a486-00ff5463ca20-kube-api-access-m695r\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236616 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236661 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236683 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236755 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-config-data\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236772 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-run\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236795 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.236829 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-sys\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.338990 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.339039 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " 
pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.339085 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.339141 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-scripts\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.339172 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b2e331a7-7edb-4984-a486-00ff5463ca20-ceph\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.339964 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-dev\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340009 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-lib-modules\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340050 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m695r\" (UniqueName: \"kubernetes.io/projected/b2e331a7-7edb-4984-a486-00ff5463ca20-kube-api-access-m695r\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340080 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340097 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340116 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340147 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-config-data\") pod \"cinder-backup-0\" (UID: 
\"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340166 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-run\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340212 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340237 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-sys\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340314 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340390 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.339185 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340447 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-dev\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.339223 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340478 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-lib-modules\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340860 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " 
pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340895 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.340930 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.341823 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-sys\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.341930 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b2e331a7-7edb-4984-a486-00ff5463ca20-run\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.346450 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b2e331a7-7edb-4984-a486-00ff5463ca20-ceph\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.346663 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.352243 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.352493 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-config-data\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.353099 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e331a7-7edb-4984-a486-00ff5463ca20-scripts\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.358670 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m695r\" (UniqueName: \"kubernetes.io/projected/b2e331a7-7edb-4984-a486-00ff5463ca20-kube-api-access-m695r\") pod \"cinder-backup-0\" (UID: \"b2e331a7-7edb-4984-a486-00ff5463ca20\") " pod="openstack/cinder-backup-0" Dec 02 15:18:01 crc 
Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.402841 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.426200 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.437626 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.529734 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b8adcc13-3199-4c22-b50e-cb975a62c107","Type":"ContainerStarted","Data":"3692a9035363e6992d63f7061f8306350aeb8be7ff1fe8ed4c3ee2eb67a2be10"}
Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.828956 4900 scope.go:117] "RemoveContainer" containerID="1436d82113517789e2393af3027805bbf70208b948198545bdc6c3d059ce2810"
Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.867844 4900 scope.go:117] "RemoveContainer" containerID="7c973b0ce29fb79f3a890252dc9950737ffe7079ed0b155407d25ac38570f223"
Dec 02 15:18:01 crc kubenswrapper[4900]: I1202 15:18:01.928604 4900 scope.go:117] "RemoveContainer" containerID="6841c2eb29f71a0168182bbb5c7287ba15e59f71f304ab056b9b289d914e3e52"
Dec 02 15:18:02 crc kubenswrapper[4900]: I1202 15:18:01.995412 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Dec 02 15:18:02 crc kubenswrapper[4900]: W1202 15:18:02.015147 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2e331a7_7edb_4984_a486_00ff5463ca20.slice/crio-554747a7fda3e72477658e07fe4fecb3ffe1f8de252fcfd79905b31563a57f47 WatchSource:0}: Error finding container 554747a7fda3e72477658e07fe4fecb3ffe1f8de252fcfd79905b31563a57f47: Status 404 returned error can't find the container with id 554747a7fda3e72477658e07fe4fecb3ffe1f8de252fcfd79905b31563a57f47
Dec 02 15:18:02 crc kubenswrapper[4900]: I1202 15:18:02.546447 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b2e331a7-7edb-4984-a486-00ff5463ca20","Type":"ContainerStarted","Data":"554747a7fda3e72477658e07fe4fecb3ffe1f8de252fcfd79905b31563a57f47"}
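
Two things interleave above: the "SyncLoop (PLEG)" lines come from the kubelet periodically relisting containers from CRI-O and diffing the result against its cache, and the manager.go:1169 warning appears to be cadvisor noticing the new crio-...scope cgroup before the container is queryable, a transient 404 that resolves on the next relist. A rough sketch of the relist-and-diff step, with illustrative types rather than the real PLEG:

    package main

    import "fmt"

    type state string

    const (
        running state = "running"
        exited  state = "exited"
    )

    type event struct{ id, kind string }

    // relist diffs the runtime's current view against the previous snapshot and
    // emits the lifecycle events the sync loop consumes.
    func relist(old, cur map[string]state) []event {
        var evs []event
        for id, s := range cur {
            switch {
            case old[id] != running && s == running:
                evs = append(evs, event{id, "ContainerStarted"})
            case old[id] == running && s != running:
                evs = append(evs, event{id, "ContainerDied"})
            }
        }
        return evs
    }

    func main() {
        old := map[string]state{"3692a903...": running}
        cur := map[string]state{"3692a903...": running, "554747a7...": running}
        fmt.Println(relist(old, cur)) // [{554747a7... ContainerStarted}]
    }
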
Need to start a new one" pod="openstack/cinder-api-0" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.565901 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.571336 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b8adcc13-3199-4c22-b50e-cb975a62c107","Type":"ContainerStarted","Data":"f91917be248b254342b9cd77816c8b2c4b59905936ea0336212243f794ea137c"} Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.576424 4900 generic.go:334] "Generic (PLEG): container finished" podID="f503388f-4299-472a-9d90-32f62770071f" containerID="55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed" exitCode=0 Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.576461 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503388f-4299-472a-9d90-32f62770071f","Type":"ContainerDied","Data":"55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed"} Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.576489 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f503388f-4299-472a-9d90-32f62770071f","Type":"ContainerDied","Data":"92cf03dbb1b3080335a48f4019a6030f5fb50dc39a42b2d1a377b10ff2624cc3"} Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.576506 4900 scope.go:117] "RemoveContainer" containerID="55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.576701 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.604097 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-scripts\") pod \"f503388f-4299-472a-9d90-32f62770071f\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.604140 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data\") pod \"f503388f-4299-472a-9d90-32f62770071f\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.604179 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f503388f-4299-472a-9d90-32f62770071f-logs\") pod \"f503388f-4299-472a-9d90-32f62770071f\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.604279 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data-custom\") pod \"f503388f-4299-472a-9d90-32f62770071f\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.604325 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-combined-ca-bundle\") pod \"f503388f-4299-472a-9d90-32f62770071f\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.604361 4900 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqmg4\" (UniqueName: \"kubernetes.io/projected/f503388f-4299-472a-9d90-32f62770071f-kube-api-access-dqmg4\") pod \"f503388f-4299-472a-9d90-32f62770071f\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.604396 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f503388f-4299-472a-9d90-32f62770071f-etc-machine-id\") pod \"f503388f-4299-472a-9d90-32f62770071f\" (UID: \"f503388f-4299-472a-9d90-32f62770071f\") " Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.604817 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f503388f-4299-472a-9d90-32f62770071f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f503388f-4299-472a-9d90-32f62770071f" (UID: "f503388f-4299-472a-9d90-32f62770071f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.608579 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f503388f-4299-472a-9d90-32f62770071f-logs" (OuterVolumeSpecName: "logs") pod "f503388f-4299-472a-9d90-32f62770071f" (UID: "f503388f-4299-472a-9d90-32f62770071f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.610448 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f503388f-4299-472a-9d90-32f62770071f" (UID: "f503388f-4299-472a-9d90-32f62770071f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.611383 4900 scope.go:117] "RemoveContainer" containerID="9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.612107 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f503388f-4299-472a-9d90-32f62770071f-kube-api-access-dqmg4" (OuterVolumeSpecName: "kube-api-access-dqmg4") pod "f503388f-4299-472a-9d90-32f62770071f" (UID: "f503388f-4299-472a-9d90-32f62770071f"). InnerVolumeSpecName "kube-api-access-dqmg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.618213 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-scripts" (OuterVolumeSpecName: "scripts") pod "f503388f-4299-472a-9d90-32f62770071f" (UID: "f503388f-4299-472a-9d90-32f62770071f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.649446 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f503388f-4299-472a-9d90-32f62770071f" (UID: "f503388f-4299-472a-9d90-32f62770071f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.662164 4900 scope.go:117] "RemoveContainer" containerID="55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed" Dec 02 15:18:03 crc kubenswrapper[4900]: E1202 15:18:03.663459 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed\": container with ID starting with 55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed not found: ID does not exist" containerID="55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.663509 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed"} err="failed to get container status \"55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed\": rpc error: code = NotFound desc = could not find container \"55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed\": container with ID starting with 55543cf8588ad8b3841bec288e92dcf8bf83c3645bc6de9969e2f8d80fccf5ed not found: ID does not exist" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.663536 4900 scope.go:117] "RemoveContainer" containerID="9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985" Dec 02 15:18:03 crc kubenswrapper[4900]: E1202 15:18:03.663784 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985\": container with ID starting with 9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985 not found: ID does not exist" containerID="9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.663814 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985"} err="failed to get container status \"9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985\": rpc error: code = NotFound desc = could not find container \"9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985\": container with ID starting with 9f9aab0240dc6ae8994babf0aea213a79e85607c3414bd3b18789229a2aab985 not found: ID does not exist" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.678868 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data" (OuterVolumeSpecName: "config-data") pod "f503388f-4299-472a-9d90-32f62770071f" (UID: "f503388f-4299-472a-9d90-32f62770071f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.707992 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqmg4\" (UniqueName: \"kubernetes.io/projected/f503388f-4299-472a-9d90-32f62770071f-kube-api-access-dqmg4\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.708031 4900 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f503388f-4299-472a-9d90-32f62770071f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.708043 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.708053 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.708063 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f503388f-4299-472a-9d90-32f62770071f-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.708074 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.708084 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f503388f-4299-472a-9d90-32f62770071f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.935911 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.962439 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.968767 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 02 15:18:03 crc kubenswrapper[4900]: E1202 15:18:03.969208 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f503388f-4299-472a-9d90-32f62770071f" containerName="cinder-api-log" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.969228 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f503388f-4299-472a-9d90-32f62770071f" containerName="cinder-api-log" Dec 02 15:18:03 crc kubenswrapper[4900]: E1202 15:18:03.969251 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f503388f-4299-472a-9d90-32f62770071f" containerName="cinder-api" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.969259 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f503388f-4299-472a-9d90-32f62770071f" containerName="cinder-api" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.969438 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f503388f-4299-472a-9d90-32f62770071f" containerName="cinder-api-log" Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.969464 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f503388f-4299-472a-9d90-32f62770071f" containerName="cinder-api" Dec 02 15:18:03 crc 
Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.970447 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.973410 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Dec 02 15:18:03 crc kubenswrapper[4900]: I1202 15:18:03.976929 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.022873 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc4x8\" (UniqueName: \"kubernetes.io/projected/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-kube-api-access-fc4x8\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.023593 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-config-data\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.023661 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.023750 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-scripts\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.024098 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-logs\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.024127 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.024165 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-config-data-custom\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.125825 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-logs\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.126916 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.127816 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-config-data-custom\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.126184 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-logs\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.127892 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc4x8\" (UniqueName: \"kubernetes.io/projected/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-kube-api-access-fc4x8\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.128321 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-config-data\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.128379 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.128803 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.128809 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-scripts\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.131867 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-config-data-custom\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.132098 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.133119 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-config-data\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.133152 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-scripts\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.147443 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc4x8\" (UniqueName: \"kubernetes.io/projected/b99d250f-3bdb-4c35-af5d-3ff9d38bebde-kube-api-access-fc4x8\") pod \"cinder-api-0\" (UID: \"b99d250f-3bdb-4c35-af5d-3ff9d38bebde\") " pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.303187 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.598285 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b8adcc13-3199-4c22-b50e-cb975a62c107","Type":"ContainerStarted","Data":"b81c5c26af8960d3f458507c4484d1f13922a7f3513b3e65be17709181ff78a5"}
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.603827 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b2e331a7-7edb-4984-a486-00ff5463ca20","Type":"ContainerStarted","Data":"212342fe721df118bf39214e32d226d9a8a828673c0d09b253181cf391b7a601"}
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.603874 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b2e331a7-7edb-4984-a486-00ff5463ca20","Type":"ContainerStarted","Data":"2c737a3327ac647391d98b0115bb960fa95dc67cbea8cfbb4d434c56b3d90d68"}
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.666843 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.135700656 podStartE2EDuration="4.666805455s" podCreationTimestamp="2025-12-02 15:18:00 +0000 UTC" firstStartedPulling="2025-12-02 15:18:01.437426486 +0000 UTC m=+5726.853240337" lastFinishedPulling="2025-12-02 15:18:02.968531285 +0000 UTC m=+5728.384345136" observedRunningTime="2025-12-02 15:18:04.628852529 +0000 UTC m=+5730.044666380" watchObservedRunningTime="2025-12-02 15:18:04.666805455 +0000 UTC m=+5730.082619306"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.687014 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=1.68327948 podStartE2EDuration="3.686990111s" podCreationTimestamp="2025-12-02 15:18:01 +0000 UTC" firstStartedPulling="2025-12-02 15:18:02.045001427 +0000 UTC m=+5727.460815268" lastFinishedPulling="2025-12-02 15:18:04.048712048 +0000 UTC m=+5729.464525899" observedRunningTime="2025-12-02 15:18:04.658353097 +0000 UTC m=+5730.074166948" watchObservedRunningTime="2025-12-02 15:18:04.686990111 +0000 UTC m=+5730.102803962"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.734022 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 02 15:18:04 crc kubenswrapper[4900]: W1202 15:18:04.746088 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb99d250f_3bdb_4c35_af5d_3ff9d38bebde.slice/crio-74291e506891edb4439f45e6383470a800bd6e97f6db8d17e0c822f9d356f5d5 WatchSource:0}: Error finding container 74291e506891edb4439f45e6383470a800bd6e97f6db8d17e0c822f9d356f5d5: Status 404 returned error can't find the container with id 74291e506891edb4439f45e6383470a800bd6e97f6db8d17e0c822f9d356f5d5
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.780870 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.781718 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.784918 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.785036 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 02 15:18:04 crc kubenswrapper[4900]: I1202 15:18:04.925376 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f503388f-4299-472a-9d90-32f62770071f" path="/var/lib/kubelet/pods/f503388f-4299-472a-9d90-32f62770071f/volumes"
Dec 02 15:18:05 crc kubenswrapper[4900]: I1202 15:18:05.615912 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b99d250f-3bdb-4c35-af5d-3ff9d38bebde","Type":"ContainerStarted","Data":"126d6b69b0c5e00899f203ea04a599c582c02203c634441f3e44254bd0678f34"}
Dec 02 15:18:05 crc kubenswrapper[4900]: I1202 15:18:05.616492 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 02 15:18:05 crc kubenswrapper[4900]: I1202 15:18:05.616508 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b99d250f-3bdb-4c35-af5d-3ff9d38bebde","Type":"ContainerStarted","Data":"74291e506891edb4439f45e6383470a800bd6e97f6db8d17e0c822f9d356f5d5"}
Dec 02 15:18:05 crc kubenswrapper[4900]: I1202 15:18:05.633366 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 02 15:18:05 crc kubenswrapper[4900]: I1202 15:18:05.716984 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Dec 02 15:18:05 crc kubenswrapper[4900]: I1202 15:18:05.744740 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 02 15:18:05 crc kubenswrapper[4900]: I1202 15:18:05.746065 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 02 15:18:05 crc kubenswrapper[4900]: I1202 15:18:05.751093 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 02 15:18:06 crc kubenswrapper[4900]: I1202 15:18:06.403039 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Dec 02 15:18:06 crc kubenswrapper[4900]: I1202 15:18:06.630616 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b99d250f-3bdb-4c35-af5d-3ff9d38bebde","Type":"ContainerStarted","Data":"2728da240b2b9a89ebd0f77034c964b8121d529295acc517a9b37e81e971ec6e"}
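
The probe lines above show the same readiness probe logged first with status="" and moments later with status="ready": results are cached per probe, the sync loop is only poked on a transition, and the empty status stands in until the first result lands. A toy version of that result cache, with illustrative types rather than the kubelet's prober types:

    package main

    import "fmt"

    type probeKind string

    type key struct {
        pod  string
        kind probeKind
    }

    type cache struct{ results map[key]string }

    // set stores a probe outcome and reports whether it changed, which is the
    // condition for waking the sync loop with a "SyncLoop (probe)" event.
    func (c *cache) set(k key, status string) bool {
        if c.results[k] == status {
            return false
        }
        c.results[k] = status
        return true
    }

    func main() {
        c := &cache{results: map[key]string{}}
        k := key{"openstack/nova-api-0", "readiness"}
        fmt.Printf("%q\n", c.results[k])  // "" before any result arrives
        fmt.Println(c.set(k, "ready"))    // true: transition, event emitted
        fmt.Println(c.set(k, "ready"))    // false: steady state, nothing logged
    }
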
pod="openstack/nova-metadata-0" Dec 02 15:18:06 crc kubenswrapper[4900]: I1202 15:18:06.676668 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.676622798 podStartE2EDuration="3.676622798s" podCreationTimestamp="2025-12-02 15:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:18:06.653160889 +0000 UTC m=+5732.068974740" watchObservedRunningTime="2025-12-02 15:18:06.676622798 +0000 UTC m=+5732.092436669" Dec 02 15:18:07 crc kubenswrapper[4900]: I1202 15:18:07.645622 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 02 15:18:08 crc kubenswrapper[4900]: I1202 15:18:08.804519 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 02 15:18:08 crc kubenswrapper[4900]: I1202 15:18:08.859944 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 15:18:09 crc kubenswrapper[4900]: I1202 15:18:09.669579 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f437502b-ceb8-45f9-90a9-e69a5d156d51" containerName="cinder-scheduler" containerID="cri-o://3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68" gracePeriod=30 Dec 02 15:18:09 crc kubenswrapper[4900]: I1202 15:18:09.669744 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f437502b-ceb8-45f9-90a9-e69a5d156d51" containerName="probe" containerID="cri-o://609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d" gracePeriod=30 Dec 02 15:18:09 crc kubenswrapper[4900]: I1202 15:18:09.910322 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:18:09 crc kubenswrapper[4900]: E1202 15:18:09.911941 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:18:10 crc kubenswrapper[4900]: I1202 15:18:10.689717 4900 generic.go:334] "Generic (PLEG): container finished" podID="f437502b-ceb8-45f9-90a9-e69a5d156d51" containerID="609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d" exitCode=0 Dec 02 15:18:10 crc kubenswrapper[4900]: I1202 15:18:10.689766 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f437502b-ceb8-45f9-90a9-e69a5d156d51","Type":"ContainerDied","Data":"609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d"} Dec 02 15:18:10 crc kubenswrapper[4900]: I1202 15:18:10.996632 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Dec 02 15:18:11 crc kubenswrapper[4900]: I1202 15:18:11.647790 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.547840 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.715563 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f437502b-ceb8-45f9-90a9-e69a5d156d51-etc-machine-id\") pod \"f437502b-ceb8-45f9-90a9-e69a5d156d51\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.715692 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f437502b-ceb8-45f9-90a9-e69a5d156d51-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f437502b-ceb8-45f9-90a9-e69a5d156d51" (UID: "f437502b-ceb8-45f9-90a9-e69a5d156d51"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.715762 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data\") pod \"f437502b-ceb8-45f9-90a9-e69a5d156d51\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.715797 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z88hm\" (UniqueName: \"kubernetes.io/projected/f437502b-ceb8-45f9-90a9-e69a5d156d51-kube-api-access-z88hm\") pod \"f437502b-ceb8-45f9-90a9-e69a5d156d51\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.715920 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data-custom\") pod \"f437502b-ceb8-45f9-90a9-e69a5d156d51\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.715963 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-scripts\") pod \"f437502b-ceb8-45f9-90a9-e69a5d156d51\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.716018 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-combined-ca-bundle\") pod \"f437502b-ceb8-45f9-90a9-e69a5d156d51\" (UID: \"f437502b-ceb8-45f9-90a9-e69a5d156d51\") " Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.716455 4900 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f437502b-ceb8-45f9-90a9-e69a5d156d51-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.724731 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f437502b-ceb8-45f9-90a9-e69a5d156d51" (UID: "f437502b-ceb8-45f9-90a9-e69a5d156d51"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.726178 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-scripts" (OuterVolumeSpecName: "scripts") pod "f437502b-ceb8-45f9-90a9-e69a5d156d51" (UID: "f437502b-ceb8-45f9-90a9-e69a5d156d51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.727355 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f437502b-ceb8-45f9-90a9-e69a5d156d51-kube-api-access-z88hm" (OuterVolumeSpecName: "kube-api-access-z88hm") pod "f437502b-ceb8-45f9-90a9-e69a5d156d51" (UID: "f437502b-ceb8-45f9-90a9-e69a5d156d51"). InnerVolumeSpecName "kube-api-access-z88hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.731273 4900 generic.go:334] "Generic (PLEG): container finished" podID="f437502b-ceb8-45f9-90a9-e69a5d156d51" containerID="3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68" exitCode=0 Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.731339 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.731539 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f437502b-ceb8-45f9-90a9-e69a5d156d51","Type":"ContainerDied","Data":"3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68"} Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.732514 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f437502b-ceb8-45f9-90a9-e69a5d156d51","Type":"ContainerDied","Data":"918b3cf0423536b7dcff20a4e8c4a0e70fa003120bafdf5a0224d8755895ffd4"} Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.732598 4900 scope.go:117] "RemoveContainer" containerID="609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.774215 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f437502b-ceb8-45f9-90a9-e69a5d156d51" (UID: "f437502b-ceb8-45f9-90a9-e69a5d156d51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.818334 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.818374 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z88hm\" (UniqueName: \"kubernetes.io/projected/f437502b-ceb8-45f9-90a9-e69a5d156d51-kube-api-access-z88hm\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.818393 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.818406 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.824265 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data" (OuterVolumeSpecName: "config-data") pod "f437502b-ceb8-45f9-90a9-e69a5d156d51" (UID: "f437502b-ceb8-45f9-90a9-e69a5d156d51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.837951 4900 scope.go:117] "RemoveContainer" containerID="3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.868627 4900 scope.go:117] "RemoveContainer" containerID="609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d" Dec 02 15:18:12 crc kubenswrapper[4900]: E1202 15:18:12.869101 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d\": container with ID starting with 609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d not found: ID does not exist" containerID="609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.869173 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d"} err="failed to get container status \"609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d\": rpc error: code = NotFound desc = could not find container \"609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d\": container with ID starting with 609078d6cb4420fdd09cc6a15807997834c4d6284aa6b36f371d36499726081d not found: ID does not exist" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.869200 4900 scope.go:117] "RemoveContainer" containerID="3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68" Dec 02 15:18:12 crc kubenswrapper[4900]: E1202 15:18:12.869519 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68\": container with ID starting with 3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68 not found: ID does not exist" 
containerID="3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.869553 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68"} err="failed to get container status \"3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68\": rpc error: code = NotFound desc = could not find container \"3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68\": container with ID starting with 3f82752e8998ffd651379734da9cb00cec4c01ba744b0eba0cedb0c48c376b68 not found: ID does not exist" Dec 02 15:18:12 crc kubenswrapper[4900]: I1202 15:18:12.919670 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f437502b-ceb8-45f9-90a9-e69a5d156d51-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.068846 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.078096 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.092594 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 15:18:13 crc kubenswrapper[4900]: E1202 15:18:13.093349 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f437502b-ceb8-45f9-90a9-e69a5d156d51" containerName="cinder-scheduler" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.093374 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f437502b-ceb8-45f9-90a9-e69a5d156d51" containerName="cinder-scheduler" Dec 02 15:18:13 crc kubenswrapper[4900]: E1202 15:18:13.093394 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f437502b-ceb8-45f9-90a9-e69a5d156d51" containerName="probe" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.093403 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f437502b-ceb8-45f9-90a9-e69a5d156d51" containerName="probe" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.093752 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f437502b-ceb8-45f9-90a9-e69a5d156d51" containerName="cinder-scheduler" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.093784 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f437502b-ceb8-45f9-90a9-e69a5d156d51" containerName="probe" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.095071 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.099418 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.117818 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.225281 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.225362 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.225585 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-scripts\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.225629 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.225734 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-config-data\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.226041 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7786r\" (UniqueName: \"kubernetes.io/projected/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-kube-api-access-7786r\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.327567 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-config-data\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.328208 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7786r\" (UniqueName: \"kubernetes.io/projected/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-kube-api-access-7786r\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.328326 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.328416 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.328483 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-scripts\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.328489 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.328521 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.334408 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.335392 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-config-data\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.336018 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.338223 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-scripts\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.346999 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7786r\" (UniqueName: \"kubernetes.io/projected/ee4e6e1c-8df1-4cec-be59-c6f7cb764f15-kube-api-access-7786r\") pod \"cinder-scheduler-0\" (UID: \"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15\") " pod="openstack/cinder-scheduler-0" Dec 02 15:18:13 
Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.453390 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 02 15:18:13 crc kubenswrapper[4900]: I1202 15:18:13.964398 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 02 15:18:13 crc kubenswrapper[4900]: W1202 15:18:13.967707 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee4e6e1c_8df1_4cec_be59_c6f7cb764f15.slice/crio-cd22e32d04a281d1a275bd2d217e2deb22dd81a2ab418b70bd0fe5098621f577 WatchSource:0}: Error finding container cd22e32d04a281d1a275bd2d217e2deb22dd81a2ab418b70bd0fe5098621f577: Status 404 returned error can't find the container with id cd22e32d04a281d1a275bd2d217e2deb22dd81a2ab418b70bd0fe5098621f577
Dec 02 15:18:14 crc kubenswrapper[4900]: I1202 15:18:14.757884 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15","Type":"ContainerStarted","Data":"a480a2abaef60827ca5b6ec7d355a8383dbc6a2f8adb7d9c0b75e706e9a7309f"}
Dec 02 15:18:14 crc kubenswrapper[4900]: I1202 15:18:14.758220 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15","Type":"ContainerStarted","Data":"cd22e32d04a281d1a275bd2d217e2deb22dd81a2ab418b70bd0fe5098621f577"}
Dec 02 15:18:14 crc kubenswrapper[4900]: I1202 15:18:14.929030 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f437502b-ceb8-45f9-90a9-e69a5d156d51" path="/var/lib/kubelet/pods/f437502b-ceb8-45f9-90a9-e69a5d156d51/volumes"
Dec 02 15:18:15 crc kubenswrapper[4900]: I1202 15:18:15.772593 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee4e6e1c-8df1-4cec-be59-c6f7cb764f15","Type":"ContainerStarted","Data":"691c6d918bbbbddd27ef644d891d7e91273fa078df9ed4b2f29f2d8458744915"}
Dec 02 15:18:15 crc kubenswrapper[4900]: I1202 15:18:15.809562 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.809548023 podStartE2EDuration="2.809548023s" podCreationTimestamp="2025-12-02 15:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:18:15.804500921 +0000 UTC m=+5741.220314772" watchObservedRunningTime="2025-12-02 15:18:15.809548023 +0000 UTC m=+5741.225361874"
Dec 02 15:18:16 crc kubenswrapper[4900]: I1202 15:18:16.302562 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 02 15:18:18 crc kubenswrapper[4900]: I1202 15:18:18.453677 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 02 15:18:20 crc kubenswrapper[4900]: I1202 15:18:20.911101 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:18:20 crc kubenswrapper[4900]: E1202 15:18:20.911820 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:18:23 crc kubenswrapper[4900]: I1202 15:18:23.658596 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 02 15:18:32 crc kubenswrapper[4900]: I1202 15:18:32.911391 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:18:32 crc kubenswrapper[4900]: E1202 15:18:32.912691 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:18:47 crc kubenswrapper[4900]: I1202 15:18:47.910417 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:18:47 crc kubenswrapper[4900]: E1202 15:18:47.911216 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:19:01 crc kubenswrapper[4900]: I1202 15:19:01.910462 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:19:01 crc kubenswrapper[4900]: E1202 15:19:01.911634 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:19:02 crc kubenswrapper[4900]: I1202 15:19:02.148400 4900 scope.go:117] "RemoveContainer" containerID="982ebc35218e99919d6191c10cd75f441ed4a42ee5c7787b236bab706c1a00a9"
Dec 02 15:19:02 crc kubenswrapper[4900]: I1202 15:19:02.173405 4900 scope.go:117] "RemoveContainer" containerID="55bd410d4a2d14c639587c8b021b11f0faa95be7f17c21908a0c981eee9375bb"
Dec 02 15:19:13 crc kubenswrapper[4900]: I1202 15:19:13.910623 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:19:13 crc kubenswrapper[4900]: E1202 15:19:13.911607 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:19:25 crc kubenswrapper[4900]: I1202 15:19:25.909564 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:19:36 crc kubenswrapper[4900]: I1202 15:19:36.910312 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:19:36 crc kubenswrapper[4900]: E1202 15:19:36.911470 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:19:48 crc kubenswrapper[4900]: I1202 15:19:48.910463 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:19:48 crc kubenswrapper[4900]: E1202 15:19:48.911128 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:19:59 crc kubenswrapper[4900]: I1202 15:19:59.911346 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:19:59 crc kubenswrapper[4900]: E1202 15:19:59.912763 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.060705 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-glzwh"] Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.063254 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.064911 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.070898 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-gd455" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.079062 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-2vsfp"] Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.081483 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2vsfp"
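The kubelet's sync-loop sources are interleaved above: "SyncLoop ADD/UPDATE/DELETE/REMOVE" entries come from the API-server watch, "SyncLoop (PLEG)" from the runtime's pod-lifecycle event generator, and "SyncLoop (probe)" from probe-status transitions. A rough tally of a captured journal slice helps when skimming sections like this; the regex is an assumption fitted to the exact format shown here, and it expects one journal entry per line, as journalctl prints them:

    import re
    from collections import Counter

    # Count kubelet SyncLoop event kinds in kubenswrapper journal lines.
    sync_re = re.compile(r'SyncLoop (?:\((\w+)\)|(ADD|UPDATE|DELETE|REMOVE))')

    def tally(lines):
        counts = Counter()
        for line in lines:
            for m in sync_re.finditer(line):
                counts[m.group(1) or m.group(2)] += 1
        return counts

On this section, UPDATE and PLEG counts dominate, which is what a node busy starting pods rather than reconfiguring them looks like.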
Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.122251 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-glzwh"] Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.142495 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-var-run\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.142560 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-var-log-ovn\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.142585 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-etc-ovs\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.142657 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp4fg\" (UniqueName: \"kubernetes.io/projected/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-kube-api-access-hp4fg\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.142688 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-var-lib\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.142709 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-var-run\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.142727 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40b95209-a40e-45db-bc19-a3ae870ce6ce-scripts\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.142856 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87zdv\" (UniqueName: \"kubernetes.io/projected/40b95209-a40e-45db-bc19-a3ae870ce6ce-kube-api-access-87zdv\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.142917 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-var-log\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.143002 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-var-run-ovn\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.143121 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-scripts\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.154999 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2vsfp"] Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.244727 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40b95209-a40e-45db-bc19-a3ae870ce6ce-scripts\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.244825 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87zdv\" (UniqueName: \"kubernetes.io/projected/40b95209-a40e-45db-bc19-a3ae870ce6ce-kube-api-access-87zdv\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.244856 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-var-log\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.244900 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-var-run-ovn\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.244953 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-scripts\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.244982 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-var-run\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.245006 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-var-log-ovn\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.245024 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-etc-ovs\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.245051 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp4fg\" (UniqueName: \"kubernetes.io/projected/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-kube-api-access-hp4fg\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.245077 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-var-lib\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.245094 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-var-run\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.245416 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-var-run\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.247379 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40b95209-a40e-45db-bc19-a3ae870ce6ce-scripts\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.247734 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-var-log\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.247785 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-var-run-ovn\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.248726 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-etc-ovs\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.248776 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-var-run\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.248890 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/40b95209-a40e-45db-bc19-a3ae870ce6ce-var-lib\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.248893 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-var-log-ovn\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.249556 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-scripts\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.263961 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87zdv\" (UniqueName: \"kubernetes.io/projected/40b95209-a40e-45db-bc19-a3ae870ce6ce-kube-api-access-87zdv\") pod \"ovn-controller-ovs-2vsfp\" (UID: \"40b95209-a40e-45db-bc19-a3ae870ce6ce\") " pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.265713 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp4fg\" (UniqueName: \"kubernetes.io/projected/07f3aa4d-40c4-45df-a374-1e2e908f7e3b-kube-api-access-hp4fg\") pod \"ovn-controller-glzwh\" (UID: \"07f3aa4d-40c4-45df-a374-1e2e908f7e3b\") " pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.383270 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-glzwh" Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.437338 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2vsfp"
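Every volume above walks the reconciler's three phases in order: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. A quick cross-check that each started mount also completed; the regexes are fitted to the escaped-quote form these lines use and assume one journal entry per line, so treat them as assumptions rather than a stable interface:

    import re
    from collections import defaultdict

    started = re.compile(r'MountVolume started for volume \\?"([\w-]+)\\?".*pod="([^"]+)"')
    done = re.compile(r'MountVolume\.SetUp succeeded for volume \\?"([\w-]+)\\?".*pod="([^"]+)"')

    # Any (pod, volume) pair left with a positive count never reported SetUp.
    def unmatched_mounts(lines):
        pending = defaultdict(int)
        for line in lines:
            if (m := started.search(line)):
                pending[(m.group(2), m.group(1))] += 1
            elif (m := done.search(line)):
                pending[(m.group(2), m.group(1))] -= 1
        return {key: n for key, n in pending.items() if n > 0}

Run over this span it comes back empty: all eleven volumes across ovn-controller-glzwh and ovn-controller-ovs-2vsfp mounted cleanly.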
Dec 02 15:20:05 crc kubenswrapper[4900]: I1202 15:20:05.859593 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-glzwh"] Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.097588 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-glzwh" event={"ID":"07f3aa4d-40c4-45df-a374-1e2e908f7e3b","Type":"ContainerStarted","Data":"2a01bb9199f3c7926c8a3616754f66e184fbea4810a5ab17802f4cce5353664b"} Dec 02 15:20:06 crc kubenswrapper[4900]: W1202 15:20:06.227570 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40b95209_a40e_45db_bc19_a3ae870ce6ce.slice/crio-724a6d580c2848d4fba848105521fbd6f0fdf63031d8e762804a52040786845c WatchSource:0}: Error finding container 724a6d580c2848d4fba848105521fbd6f0fdf63031d8e762804a52040786845c: Status 404 returned error can't find the container with id 724a6d580c2848d4fba848105521fbd6f0fdf63031d8e762804a52040786845c Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.231124 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2vsfp"] Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.631779 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-8zt6j"] Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.633400 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.637244 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.646206 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8zt6j"] Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.680086 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cjpq\" (UniqueName: \"kubernetes.io/projected/f7ebaa7d-56ce-4de1-954e-c478bb64871c-kube-api-access-6cjpq\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.680131 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ebaa7d-56ce-4de1-954e-c478bb64871c-config\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.680165 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7ebaa7d-56ce-4de1-954e-c478bb64871c-ovs-rundir\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.680240 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7ebaa7d-56ce-4de1-954e-c478bb64871c-ovn-rundir\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc 
kubenswrapper[4900]: I1202 15:20:06.781700 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7ebaa7d-56ce-4de1-954e-c478bb64871c-ovn-rundir\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.781985 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7ebaa7d-56ce-4de1-954e-c478bb64871c-ovn-rundir\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.782548 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cjpq\" (UniqueName: \"kubernetes.io/projected/f7ebaa7d-56ce-4de1-954e-c478bb64871c-kube-api-access-6cjpq\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.782596 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ebaa7d-56ce-4de1-954e-c478bb64871c-config\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.782680 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7ebaa7d-56ce-4de1-954e-c478bb64871c-ovs-rundir\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.782886 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7ebaa7d-56ce-4de1-954e-c478bb64871c-ovs-rundir\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.783373 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ebaa7d-56ce-4de1-954e-c478bb64871c-config\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.801129 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cjpq\" (UniqueName: \"kubernetes.io/projected/f7ebaa7d-56ce-4de1-954e-c478bb64871c-kube-api-access-6cjpq\") pod \"ovn-controller-metrics-8zt6j\" (UID: \"f7ebaa7d-56ce-4de1-954e-c478bb64871c\") " pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:06 crc kubenswrapper[4900]: I1202 15:20:06.958142 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-8zt6j" Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.116012 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rj9xn"] Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.127504 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-13d1-account-create-update-wj6rs"] Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.128869 4900 generic.go:334] "Generic (PLEG): container finished" podID="40b95209-a40e-45db-bc19-a3ae870ce6ce" containerID="ac1e9ec043546c201d78d5785666ea33c63e76d1199c9ad597fe5e52d3bcc5be" exitCode=0 Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.128959 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2vsfp" event={"ID":"40b95209-a40e-45db-bc19-a3ae870ce6ce","Type":"ContainerDied","Data":"ac1e9ec043546c201d78d5785666ea33c63e76d1199c9ad597fe5e52d3bcc5be"} Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.128988 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2vsfp" event={"ID":"40b95209-a40e-45db-bc19-a3ae870ce6ce","Type":"ContainerStarted","Data":"724a6d580c2848d4fba848105521fbd6f0fdf63031d8e762804a52040786845c"} Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.132399 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-glzwh" event={"ID":"07f3aa4d-40c4-45df-a374-1e2e908f7e3b","Type":"ContainerStarted","Data":"94de7e72341d678d097fae72a091532eef93ee4d7a327d59dd9061e3b6c4078d"} Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.132535 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-glzwh" Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.141184 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rj9xn"] Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.159725 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-13d1-account-create-update-wj6rs"] Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.213962 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-47ngw"] Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.215476 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-47ngw" Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.226695 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-47ngw"] Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.236156 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-glzwh" podStartSLOduration=2.236135933 podStartE2EDuration="2.236135933s" podCreationTimestamp="2025-12-02 15:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:20:07.185136662 +0000 UTC m=+5852.600950523" watchObservedRunningTime="2025-12-02 15:20:07.236135933 +0000 UTC m=+5852.651949784" Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.301804 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f51f9e-0c86-4624-9080-7638e95cf27e-operator-scripts\") pod \"octavia-db-create-47ngw\" (UID: \"f2f51f9e-0c86-4624-9080-7638e95cf27e\") " pod="openstack/octavia-db-create-47ngw" Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.302178 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkmk4\" (UniqueName: \"kubernetes.io/projected/f2f51f9e-0c86-4624-9080-7638e95cf27e-kube-api-access-bkmk4\") pod \"octavia-db-create-47ngw\" (UID: \"f2f51f9e-0c86-4624-9080-7638e95cf27e\") " pod="openstack/octavia-db-create-47ngw" Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.403863 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f51f9e-0c86-4624-9080-7638e95cf27e-operator-scripts\") pod \"octavia-db-create-47ngw\" (UID: \"f2f51f9e-0c86-4624-9080-7638e95cf27e\") " pod="openstack/octavia-db-create-47ngw" Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.404002 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkmk4\" (UniqueName: \"kubernetes.io/projected/f2f51f9e-0c86-4624-9080-7638e95cf27e-kube-api-access-bkmk4\") pod \"octavia-db-create-47ngw\" (UID: \"f2f51f9e-0c86-4624-9080-7638e95cf27e\") " pod="openstack/octavia-db-create-47ngw" Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.405078 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f51f9e-0c86-4624-9080-7638e95cf27e-operator-scripts\") pod \"octavia-db-create-47ngw\" (UID: \"f2f51f9e-0c86-4624-9080-7638e95cf27e\") " pod="openstack/octavia-db-create-47ngw" Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.424405 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkmk4\" (UniqueName: \"kubernetes.io/projected/f2f51f9e-0c86-4624-9080-7638e95cf27e-kube-api-access-bkmk4\") pod \"octavia-db-create-47ngw\" (UID: \"f2f51f9e-0c86-4624-9080-7638e95cf27e\") " pod="openstack/octavia-db-create-47ngw" Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.499447 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8zt6j"] Dec 02 15:20:07 crc kubenswrapper[4900]: I1202 15:20:07.537990 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-47ngw"
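The pod_startup_latency_tracker entry above for ovn-controller-glzwh can be reproduced by hand: podStartSLOduration (2.236135933) is watchObservedRunningTime minus podCreationTimestamp, and because both pull timestamps are the zero value "0001-01-01" (no image pull was needed), it equals podStartE2EDuration. A check of the arithmetic, truncating the log's nanoseconds to the microseconds Python's strptime accepts:

    from datetime import datetime, timezone

    created = datetime(2025, 12, 2, 15, 20, 5, tzinfo=timezone.utc)
    observed = datetime.strptime(
        "2025-12-02 15:20:07.236135 +0000",  # .236135933 truncated to 6 digits
        "%Y-%m-%d %H:%M:%S.%f %z")
    print((observed - created).total_seconds())  # 2.236135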
Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.092470 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-47ngw"] Dec 02 15:20:08 crc kubenswrapper[4900]: W1202 15:20:08.095176 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2f51f9e_0c86_4624_9080_7638e95cf27e.slice/crio-c530e564980d4a9dda02fb710d1fe79bf4da2fba31d13b5710ef9a6edd1aae30 WatchSource:0}: Error finding container c530e564980d4a9dda02fb710d1fe79bf4da2fba31d13b5710ef9a6edd1aae30: Status 404 returned error can't find the container with id c530e564980d4a9dda02fb710d1fe79bf4da2fba31d13b5710ef9a6edd1aae30 Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.144440 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2vsfp" event={"ID":"40b95209-a40e-45db-bc19-a3ae870ce6ce","Type":"ContainerStarted","Data":"89650dab852adf590e0f66f74e600d20c2306158f2019c20d9f7ee62a343c3db"} Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.146069 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2vsfp" event={"ID":"40b95209-a40e-45db-bc19-a3ae870ce6ce","Type":"ContainerStarted","Data":"1728031b3d13f051052f1e61a9d529608b641c155a5d6bd1236cd410cd12e560"} Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.146093 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.146104 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.146915 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8zt6j" event={"ID":"f7ebaa7d-56ce-4de1-954e-c478bb64871c","Type":"ContainerStarted","Data":"a5bc9afd18126a017116d6b1b5f125d3921ed8b0e0dff284c149d96ef78da12d"} Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.146971 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8zt6j" event={"ID":"f7ebaa7d-56ce-4de1-954e-c478bb64871c","Type":"ContainerStarted","Data":"1856ff20bc073b9fbf11a7a52563b44584bc077a57e7d8429f31b567354dad30"} Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.148335 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-47ngw" event={"ID":"f2f51f9e-0c86-4624-9080-7638e95cf27e","Type":"ContainerStarted","Data":"c530e564980d4a9dda02fb710d1fe79bf4da2fba31d13b5710ef9a6edd1aae30"} Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.173319 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-2vsfp" podStartSLOduration=3.173299534 podStartE2EDuration="3.173299534s" podCreationTimestamp="2025-12-02 15:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:20:08.160412332 +0000 UTC m=+5853.576226183" watchObservedRunningTime="2025-12-02 15:20:08.173299534 +0000 UTC m=+5853.589113375" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.199604 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-8zt6j" podStartSLOduration=2.199579581 podStartE2EDuration="2.199579581s" podCreationTimestamp="2025-12-02 15:20:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:20:08.18527681 +0000 UTC m=+5853.601090661" watchObservedRunningTime="2025-12-02 15:20:08.199579581 +0000 UTC m=+5853.615393432" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.487272 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-c45b-account-create-update-jp8vb"] Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.488897 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c45b-account-create-update-jp8vb" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.492734 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.497184 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-c45b-account-create-update-jp8vb"] Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.526421 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbbbb\" (UniqueName: \"kubernetes.io/projected/6576ef94-c3a0-4392-91d2-935f84cda6c5-kube-api-access-kbbbb\") pod \"octavia-c45b-account-create-update-jp8vb\" (UID: \"6576ef94-c3a0-4392-91d2-935f84cda6c5\") " pod="openstack/octavia-c45b-account-create-update-jp8vb" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.526548 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6576ef94-c3a0-4392-91d2-935f84cda6c5-operator-scripts\") pod \"octavia-c45b-account-create-update-jp8vb\" (UID: \"6576ef94-c3a0-4392-91d2-935f84cda6c5\") " pod="openstack/octavia-c45b-account-create-update-jp8vb" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.628203 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6576ef94-c3a0-4392-91d2-935f84cda6c5-operator-scripts\") pod \"octavia-c45b-account-create-update-jp8vb\" (UID: \"6576ef94-c3a0-4392-91d2-935f84cda6c5\") " pod="openstack/octavia-c45b-account-create-update-jp8vb" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.628589 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbbbb\" (UniqueName: \"kubernetes.io/projected/6576ef94-c3a0-4392-91d2-935f84cda6c5-kube-api-access-kbbbb\") pod \"octavia-c45b-account-create-update-jp8vb\" (UID: \"6576ef94-c3a0-4392-91d2-935f84cda6c5\") " pod="openstack/octavia-c45b-account-create-update-jp8vb" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.628959 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6576ef94-c3a0-4392-91d2-935f84cda6c5-operator-scripts\") pod \"octavia-c45b-account-create-update-jp8vb\" (UID: \"6576ef94-c3a0-4392-91d2-935f84cda6c5\") " pod="openstack/octavia-c45b-account-create-update-jp8vb" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.646904 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbbbb\" (UniqueName: \"kubernetes.io/projected/6576ef94-c3a0-4392-91d2-935f84cda6c5-kube-api-access-kbbbb\") pod \"octavia-c45b-account-create-update-jp8vb\" (UID: \"6576ef94-c3a0-4392-91d2-935f84cda6c5\") " pod="openstack/octavia-c45b-account-create-update-jp8vb" Dec 02 15:20:08 crc 
kubenswrapper[4900]: I1202 15:20:08.835330 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c45b-account-create-update-jp8vb" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.923435 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c7fd27-2e6f-4c2c-a72a-06172797e12f" path="/var/lib/kubelet/pods/41c7fd27-2e6f-4c2c-a72a-06172797e12f/volumes" Dec 02 15:20:08 crc kubenswrapper[4900]: I1202 15:20:08.924521 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace928ef-1e0e-4667-b1e0-0050528071f2" path="/var/lib/kubelet/pods/ace928ef-1e0e-4667-b1e0-0050528071f2/volumes" Dec 02 15:20:09 crc kubenswrapper[4900]: I1202 15:20:09.162367 4900 generic.go:334] "Generic (PLEG): container finished" podID="f2f51f9e-0c86-4624-9080-7638e95cf27e" containerID="b8a1704d7fef8b833276889b3bd53dcb0b87b67c70e2f595cbf0b2a4c5ecd51a" exitCode=0 Dec 02 15:20:09 crc kubenswrapper[4900]: I1202 15:20:09.162462 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-47ngw" event={"ID":"f2f51f9e-0c86-4624-9080-7638e95cf27e","Type":"ContainerDied","Data":"b8a1704d7fef8b833276889b3bd53dcb0b87b67c70e2f595cbf0b2a4c5ecd51a"} Dec 02 15:20:09 crc kubenswrapper[4900]: I1202 15:20:09.324190 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-c45b-account-create-update-jp8vb"] Dec 02 15:20:09 crc kubenswrapper[4900]: W1202 15:20:09.332903 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6576ef94_c3a0_4392_91d2_935f84cda6c5.slice/crio-4a8a3096576b944a32ee7c0c5c9ffb5b2eed2d4fc703aba5a10c0055a0eb6785 WatchSource:0}: Error finding container 4a8a3096576b944a32ee7c0c5c9ffb5b2eed2d4fc703aba5a10c0055a0eb6785: Status 404 returned error can't find the container with id 4a8a3096576b944a32ee7c0c5c9ffb5b2eed2d4fc703aba5a10c0055a0eb6785 Dec 02 15:20:10 crc kubenswrapper[4900]: I1202 15:20:10.175074 4900 generic.go:334] "Generic (PLEG): container finished" podID="6576ef94-c3a0-4392-91d2-935f84cda6c5" containerID="28dac865f8ba085fdf607d13dc692464463829cb5723c325d73af3ce03d1bb0a" exitCode=0 Dec 02 15:20:10 crc kubenswrapper[4900]: I1202 15:20:10.175169 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c45b-account-create-update-jp8vb" event={"ID":"6576ef94-c3a0-4392-91d2-935f84cda6c5","Type":"ContainerDied","Data":"28dac865f8ba085fdf607d13dc692464463829cb5723c325d73af3ce03d1bb0a"} Dec 02 15:20:10 crc kubenswrapper[4900]: I1202 15:20:10.176246 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c45b-account-create-update-jp8vb" event={"ID":"6576ef94-c3a0-4392-91d2-935f84cda6c5","Type":"ContainerStarted","Data":"4a8a3096576b944a32ee7c0c5c9ffb5b2eed2d4fc703aba5a10c0055a0eb6785"} Dec 02 15:20:10 crc kubenswrapper[4900]: I1202 15:20:10.686461 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-47ngw"
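The octavia-db-create-47ngw job above shows the short-lived batch pattern: PLEG reports "container finished" with exitCode=0, then its ContainerDied event, and below the job's volumes are unmounted and detached before the pod object goes away. Pulling exit codes out of a section like this is the quickest way to tell a completed database-create job from a silent failure; the regex is fitted to the format shown here and assumes one journal entry per line:

    import re

    # Extract (podID, short container ID, exit code) from the PLEG
    # "container finished" lines.
    finished = re.compile(
        r'container finished" podID="([^"]+)" containerID="([0-9a-f]+)" exitCode=(\d+)')

    def exit_codes(lines):
        return [(m.group(1), m.group(2)[:12], int(m.group(3)))
                for line in lines
                if (m := finished.search(line))]

Every "container finished" event in this section reports exitCode=0.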
Dec 02 15:20:10 crc kubenswrapper[4900]: I1202 15:20:10.773087 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkmk4\" (UniqueName: \"kubernetes.io/projected/f2f51f9e-0c86-4624-9080-7638e95cf27e-kube-api-access-bkmk4\") pod \"f2f51f9e-0c86-4624-9080-7638e95cf27e\" (UID: \"f2f51f9e-0c86-4624-9080-7638e95cf27e\") " Dec 02 15:20:10 crc kubenswrapper[4900]: I1202 15:20:10.773204 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f51f9e-0c86-4624-9080-7638e95cf27e-operator-scripts\") pod \"f2f51f9e-0c86-4624-9080-7638e95cf27e\" (UID: \"f2f51f9e-0c86-4624-9080-7638e95cf27e\") " Dec 02 15:20:10 crc kubenswrapper[4900]: I1202 15:20:10.773878 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2f51f9e-0c86-4624-9080-7638e95cf27e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2f51f9e-0c86-4624-9080-7638e95cf27e" (UID: "f2f51f9e-0c86-4624-9080-7638e95cf27e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:20:10 crc kubenswrapper[4900]: I1202 15:20:10.774409 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f51f9e-0c86-4624-9080-7638e95cf27e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:10 crc kubenswrapper[4900]: I1202 15:20:10.781871 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f51f9e-0c86-4624-9080-7638e95cf27e-kube-api-access-bkmk4" (OuterVolumeSpecName: "kube-api-access-bkmk4") pod "f2f51f9e-0c86-4624-9080-7638e95cf27e" (UID: "f2f51f9e-0c86-4624-9080-7638e95cf27e"). InnerVolumeSpecName "kube-api-access-bkmk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:20:10 crc kubenswrapper[4900]: I1202 15:20:10.876139 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkmk4\" (UniqueName: \"kubernetes.io/projected/f2f51f9e-0c86-4624-9080-7638e95cf27e-kube-api-access-bkmk4\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:11 crc kubenswrapper[4900]: I1202 15:20:11.215412 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-47ngw" Dec 02 15:20:11 crc kubenswrapper[4900]: I1202 15:20:11.216040 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-47ngw" event={"ID":"f2f51f9e-0c86-4624-9080-7638e95cf27e","Type":"ContainerDied","Data":"c530e564980d4a9dda02fb710d1fe79bf4da2fba31d13b5710ef9a6edd1aae30"} Dec 02 15:20:11 crc kubenswrapper[4900]: I1202 15:20:11.216081 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c530e564980d4a9dda02fb710d1fe79bf4da2fba31d13b5710ef9a6edd1aae30" Dec 02 15:20:11 crc kubenswrapper[4900]: I1202 15:20:11.696920 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-c45b-account-create-update-jp8vb" Dec 02 15:20:11 crc kubenswrapper[4900]: I1202 15:20:11.825037 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbbbb\" (UniqueName: \"kubernetes.io/projected/6576ef94-c3a0-4392-91d2-935f84cda6c5-kube-api-access-kbbbb\") pod \"6576ef94-c3a0-4392-91d2-935f84cda6c5\" (UID: \"6576ef94-c3a0-4392-91d2-935f84cda6c5\") " Dec 02 15:20:11 crc kubenswrapper[4900]: I1202 15:20:11.825106 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6576ef94-c3a0-4392-91d2-935f84cda6c5-operator-scripts\") pod \"6576ef94-c3a0-4392-91d2-935f84cda6c5\" (UID: \"6576ef94-c3a0-4392-91d2-935f84cda6c5\") " Dec 02 15:20:11 crc kubenswrapper[4900]: I1202 15:20:11.826384 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6576ef94-c3a0-4392-91d2-935f84cda6c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6576ef94-c3a0-4392-91d2-935f84cda6c5" (UID: "6576ef94-c3a0-4392-91d2-935f84cda6c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:20:11 crc kubenswrapper[4900]: I1202 15:20:11.826717 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6576ef94-c3a0-4392-91d2-935f84cda6c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:11 crc kubenswrapper[4900]: I1202 15:20:11.829256 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6576ef94-c3a0-4392-91d2-935f84cda6c5-kube-api-access-kbbbb" (OuterVolumeSpecName: "kube-api-access-kbbbb") pod "6576ef94-c3a0-4392-91d2-935f84cda6c5" (UID: "6576ef94-c3a0-4392-91d2-935f84cda6c5"). InnerVolumeSpecName "kube-api-access-kbbbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:20:11 crc kubenswrapper[4900]: I1202 15:20:11.929368 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbbbb\" (UniqueName: \"kubernetes.io/projected/6576ef94-c3a0-4392-91d2-935f84cda6c5-kube-api-access-kbbbb\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:12 crc kubenswrapper[4900]: I1202 15:20:12.226586 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c45b-account-create-update-jp8vb" event={"ID":"6576ef94-c3a0-4392-91d2-935f84cda6c5","Type":"ContainerDied","Data":"4a8a3096576b944a32ee7c0c5c9ffb5b2eed2d4fc703aba5a10c0055a0eb6785"} Dec 02 15:20:12 crc kubenswrapper[4900]: I1202 15:20:12.226632 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a8a3096576b944a32ee7c0c5c9ffb5b2eed2d4fc703aba5a10c0055a0eb6785" Dec 02 15:20:12 crc kubenswrapper[4900]: I1202 15:20:12.226637 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-c45b-account-create-update-jp8vb" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.762522 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-8992z"] Dec 02 15:20:13 crc kubenswrapper[4900]: E1202 15:20:13.763571 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f51f9e-0c86-4624-9080-7638e95cf27e" containerName="mariadb-database-create" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.763597 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f51f9e-0c86-4624-9080-7638e95cf27e" containerName="mariadb-database-create" Dec 02 15:20:13 crc kubenswrapper[4900]: E1202 15:20:13.763702 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6576ef94-c3a0-4392-91d2-935f84cda6c5" containerName="mariadb-account-create-update" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.763719 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6576ef94-c3a0-4392-91d2-935f84cda6c5" containerName="mariadb-account-create-update" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.764058 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="6576ef94-c3a0-4392-91d2-935f84cda6c5" containerName="mariadb-account-create-update" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.764088 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2f51f9e-0c86-4624-9080-7638e95cf27e" containerName="mariadb-database-create" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.765224 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-8992z" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.772379 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-8992z"] Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.870803 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4953e279-5ba5-4fad-9b71-f4baa47a27c2-operator-scripts\") pod \"octavia-persistence-db-create-8992z\" (UID: \"4953e279-5ba5-4fad-9b71-f4baa47a27c2\") " pod="openstack/octavia-persistence-db-create-8992z" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.870903 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4b7q\" (UniqueName: \"kubernetes.io/projected/4953e279-5ba5-4fad-9b71-f4baa47a27c2-kube-api-access-k4b7q\") pod \"octavia-persistence-db-create-8992z\" (UID: \"4953e279-5ba5-4fad-9b71-f4baa47a27c2\") " pod="openstack/octavia-persistence-db-create-8992z" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.910667 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:20:13 crc kubenswrapper[4900]: E1202 15:20:13.911047 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.972355 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-k4b7q\" (UniqueName: \"kubernetes.io/projected/4953e279-5ba5-4fad-9b71-f4baa47a27c2-kube-api-access-k4b7q\") pod \"octavia-persistence-db-create-8992z\" (UID: \"4953e279-5ba5-4fad-9b71-f4baa47a27c2\") " pod="openstack/octavia-persistence-db-create-8992z" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.972491 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4953e279-5ba5-4fad-9b71-f4baa47a27c2-operator-scripts\") pod \"octavia-persistence-db-create-8992z\" (UID: \"4953e279-5ba5-4fad-9b71-f4baa47a27c2\") " pod="openstack/octavia-persistence-db-create-8992z" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.973212 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4953e279-5ba5-4fad-9b71-f4baa47a27c2-operator-scripts\") pod \"octavia-persistence-db-create-8992z\" (UID: \"4953e279-5ba5-4fad-9b71-f4baa47a27c2\") " pod="openstack/octavia-persistence-db-create-8992z" Dec 02 15:20:13 crc kubenswrapper[4900]: I1202 15:20:13.995858 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4b7q\" (UniqueName: \"kubernetes.io/projected/4953e279-5ba5-4fad-9b71-f4baa47a27c2-kube-api-access-k4b7q\") pod \"octavia-persistence-db-create-8992z\" (UID: \"4953e279-5ba5-4fad-9b71-f4baa47a27c2\") " pod="openstack/octavia-persistence-db-create-8992z" Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.038115 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-dwq6m"] Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.052307 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-dwq6m"] Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.117231 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-8992z" Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.634163 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-8992z"] Dec 02 15:20:14 crc kubenswrapper[4900]: W1202 15:20:14.635210 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4953e279_5ba5_4fad_9b71_f4baa47a27c2.slice/crio-3c018f50a43264edd6647e14334000f8b6e1bd905fb7efdf125c648e0bbda4e5 WatchSource:0}: Error finding container 3c018f50a43264edd6647e14334000f8b6e1bd905fb7efdf125c648e0bbda4e5: Status 404 returned error can't find the container with id 3c018f50a43264edd6647e14334000f8b6e1bd905fb7efdf125c648e0bbda4e5 Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.813192 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-27a3-account-create-update-fw4gf"] Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.814809 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-27a3-account-create-update-fw4gf"
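The W1202 manager.go:1169 warnings scattered through this section ("Status 404 ... can't find the container with id ...") appear when the cgroup watcher notices a new crio- scope before the runtime has finished registering the container. In every case here the same 64-hex ID turns up moments later in a ContainerStarted PLEG event, so the warnings are transient races rather than failures. A correlation check under that reading, assuming one journal entry per line:

    import re

    warn = re.compile(r"can't find the container with id ([0-9a-f]{64})")
    started = re.compile(r'"Type":"ContainerStarted","Data":"([0-9a-f]{64})"')

    # IDs that were warned about but later reported started: transient races.
    def transient_404s(lines):
        warned = {m.group(1) for line in lines for m in warn.finditer(line)}
        up = {m.group(1) for line in lines for m in started.finditer(line)}
        return warned & up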
Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.817467 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.831134 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-27a3-account-create-update-fw4gf"] Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.889984 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157d9be1-babc-42f1-81f4-affacc965d19-operator-scripts\") pod \"octavia-27a3-account-create-update-fw4gf\" (UID: \"157d9be1-babc-42f1-81f4-affacc965d19\") " pod="openstack/octavia-27a3-account-create-update-fw4gf" Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.890303 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6br\" (UniqueName: \"kubernetes.io/projected/157d9be1-babc-42f1-81f4-affacc965d19-kube-api-access-9n6br\") pod \"octavia-27a3-account-create-update-fw4gf\" (UID: \"157d9be1-babc-42f1-81f4-affacc965d19\") " pod="openstack/octavia-27a3-account-create-update-fw4gf" Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.924369 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fde6f6d-851f-4370-a884-1f81c7ca4f15" path="/var/lib/kubelet/pods/3fde6f6d-851f-4370-a884-1f81c7ca4f15/volumes" Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.992688 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157d9be1-babc-42f1-81f4-affacc965d19-operator-scripts\") pod \"octavia-27a3-account-create-update-fw4gf\" (UID: \"157d9be1-babc-42f1-81f4-affacc965d19\") " pod="openstack/octavia-27a3-account-create-update-fw4gf" Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.992831 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6br\" (UniqueName: \"kubernetes.io/projected/157d9be1-babc-42f1-81f4-affacc965d19-kube-api-access-9n6br\") pod \"octavia-27a3-account-create-update-fw4gf\" (UID: \"157d9be1-babc-42f1-81f4-affacc965d19\") " pod="openstack/octavia-27a3-account-create-update-fw4gf" Dec 02 15:20:14 crc kubenswrapper[4900]: I1202 15:20:14.995149 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157d9be1-babc-42f1-81f4-affacc965d19-operator-scripts\") pod \"octavia-27a3-account-create-update-fw4gf\" (UID: \"157d9be1-babc-42f1-81f4-affacc965d19\") " pod="openstack/octavia-27a3-account-create-update-fw4gf" Dec 02 15:20:15 crc kubenswrapper[4900]: I1202 15:20:15.018473 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6br\" (UniqueName: \"kubernetes.io/projected/157d9be1-babc-42f1-81f4-affacc965d19-kube-api-access-9n6br\") pod \"octavia-27a3-account-create-update-fw4gf\" (UID: \"157d9be1-babc-42f1-81f4-affacc965d19\") " pod="openstack/octavia-27a3-account-create-update-fw4gf" Dec 02 15:20:15 crc kubenswrapper[4900]: I1202 15:20:15.142262 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-27a3-account-create-update-fw4gf" Dec 02 15:20:15 crc kubenswrapper[4900]: I1202 15:20:15.269090 4900 generic.go:334] "Generic (PLEG): container finished" podID="4953e279-5ba5-4fad-9b71-f4baa47a27c2" containerID="30557a7ab552f8030f0c67948c719f3a618bb09a138d1bb8f7000c7b5c9f93d1" exitCode=0 Dec 02 15:20:15 crc kubenswrapper[4900]: I1202 15:20:15.269140 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-8992z" event={"ID":"4953e279-5ba5-4fad-9b71-f4baa47a27c2","Type":"ContainerDied","Data":"30557a7ab552f8030f0c67948c719f3a618bb09a138d1bb8f7000c7b5c9f93d1"} Dec 02 15:20:15 crc kubenswrapper[4900]: I1202 15:20:15.269168 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-8992z" event={"ID":"4953e279-5ba5-4fad-9b71-f4baa47a27c2","Type":"ContainerStarted","Data":"3c018f50a43264edd6647e14334000f8b6e1bd905fb7efdf125c648e0bbda4e5"} Dec 02 15:20:15 crc kubenswrapper[4900]: I1202 15:20:15.582772 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-27a3-account-create-update-fw4gf"] Dec 02 15:20:16 crc kubenswrapper[4900]: I1202 15:20:16.284344 4900 generic.go:334] "Generic (PLEG): container finished" podID="157d9be1-babc-42f1-81f4-affacc965d19" containerID="ed4cc4901ae1e401b8bb8e05ac1d4e79935d8a4bd4b6219f79bc280f657e5b9a" exitCode=0 Dec 02 15:20:16 crc kubenswrapper[4900]: I1202 15:20:16.284516 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-27a3-account-create-update-fw4gf" event={"ID":"157d9be1-babc-42f1-81f4-affacc965d19","Type":"ContainerDied","Data":"ed4cc4901ae1e401b8bb8e05ac1d4e79935d8a4bd4b6219f79bc280f657e5b9a"} Dec 02 15:20:16 crc kubenswrapper[4900]: I1202 15:20:16.285124 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-27a3-account-create-update-fw4gf" event={"ID":"157d9be1-babc-42f1-81f4-affacc965d19","Type":"ContainerStarted","Data":"c9140984ee1feca42b4b6357c3030f2de2d596e88c71e826917a3954f3cd94a0"} Dec 02 15:20:16 crc kubenswrapper[4900]: I1202 15:20:16.688831 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-8992z" Dec 02 15:20:16 crc kubenswrapper[4900]: I1202 15:20:16.725146 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4b7q\" (UniqueName: \"kubernetes.io/projected/4953e279-5ba5-4fad-9b71-f4baa47a27c2-kube-api-access-k4b7q\") pod \"4953e279-5ba5-4fad-9b71-f4baa47a27c2\" (UID: \"4953e279-5ba5-4fad-9b71-f4baa47a27c2\") " Dec 02 15:20:16 crc kubenswrapper[4900]: I1202 15:20:16.725236 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4953e279-5ba5-4fad-9b71-f4baa47a27c2-operator-scripts\") pod \"4953e279-5ba5-4fad-9b71-f4baa47a27c2\" (UID: \"4953e279-5ba5-4fad-9b71-f4baa47a27c2\") " Dec 02 15:20:16 crc kubenswrapper[4900]: I1202 15:20:16.726196 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4953e279-5ba5-4fad-9b71-f4baa47a27c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4953e279-5ba5-4fad-9b71-f4baa47a27c2" (UID: "4953e279-5ba5-4fad-9b71-f4baa47a27c2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:20:16 crc kubenswrapper[4900]: I1202 15:20:16.731714 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4953e279-5ba5-4fad-9b71-f4baa47a27c2-kube-api-access-k4b7q" (OuterVolumeSpecName: "kube-api-access-k4b7q") pod "4953e279-5ba5-4fad-9b71-f4baa47a27c2" (UID: "4953e279-5ba5-4fad-9b71-f4baa47a27c2"). InnerVolumeSpecName "kube-api-access-k4b7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:20:16 crc kubenswrapper[4900]: I1202 15:20:16.827497 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4b7q\" (UniqueName: \"kubernetes.io/projected/4953e279-5ba5-4fad-9b71-f4baa47a27c2-kube-api-access-k4b7q\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:16 crc kubenswrapper[4900]: I1202 15:20:16.827546 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4953e279-5ba5-4fad-9b71-f4baa47a27c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:17 crc kubenswrapper[4900]: I1202 15:20:17.298854 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-8992z" Dec 02 15:20:17 crc kubenswrapper[4900]: I1202 15:20:17.298861 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-8992z" event={"ID":"4953e279-5ba5-4fad-9b71-f4baa47a27c2","Type":"ContainerDied","Data":"3c018f50a43264edd6647e14334000f8b6e1bd905fb7efdf125c648e0bbda4e5"} Dec 02 15:20:17 crc kubenswrapper[4900]: I1202 15:20:17.299333 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c018f50a43264edd6647e14334000f8b6e1bd905fb7efdf125c648e0bbda4e5" Dec 02 15:20:17 crc kubenswrapper[4900]: I1202 15:20:17.766530 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-27a3-account-create-update-fw4gf" Dec 02 15:20:17 crc kubenswrapper[4900]: I1202 15:20:17.847235 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157d9be1-babc-42f1-81f4-affacc965d19-operator-scripts\") pod \"157d9be1-babc-42f1-81f4-affacc965d19\" (UID: \"157d9be1-babc-42f1-81f4-affacc965d19\") " Dec 02 15:20:17 crc kubenswrapper[4900]: I1202 15:20:17.847535 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n6br\" (UniqueName: \"kubernetes.io/projected/157d9be1-babc-42f1-81f4-affacc965d19-kube-api-access-9n6br\") pod \"157d9be1-babc-42f1-81f4-affacc965d19\" (UID: \"157d9be1-babc-42f1-81f4-affacc965d19\") " Dec 02 15:20:17 crc kubenswrapper[4900]: I1202 15:20:17.847822 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/157d9be1-babc-42f1-81f4-affacc965d19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "157d9be1-babc-42f1-81f4-affacc965d19" (UID: "157d9be1-babc-42f1-81f4-affacc965d19"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:20:17 crc kubenswrapper[4900]: I1202 15:20:17.848050 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157d9be1-babc-42f1-81f4-affacc965d19-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:17 crc kubenswrapper[4900]: I1202 15:20:17.855284 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157d9be1-babc-42f1-81f4-affacc965d19-kube-api-access-9n6br" (OuterVolumeSpecName: "kube-api-access-9n6br") pod "157d9be1-babc-42f1-81f4-affacc965d19" (UID: "157d9be1-babc-42f1-81f4-affacc965d19"). InnerVolumeSpecName "kube-api-access-9n6br". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:20:17 crc kubenswrapper[4900]: I1202 15:20:17.949627 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n6br\" (UniqueName: \"kubernetes.io/projected/157d9be1-babc-42f1-81f4-affacc965d19-kube-api-access-9n6br\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:18 crc kubenswrapper[4900]: I1202 15:20:18.311512 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-27a3-account-create-update-fw4gf" event={"ID":"157d9be1-babc-42f1-81f4-affacc965d19","Type":"ContainerDied","Data":"c9140984ee1feca42b4b6357c3030f2de2d596e88c71e826917a3954f3cd94a0"} Dec 02 15:20:18 crc kubenswrapper[4900]: I1202 15:20:18.311984 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9140984ee1feca42b4b6357c3030f2de2d596e88c71e826917a3954f3cd94a0" Dec 02 15:20:18 crc kubenswrapper[4900]: I1202 15:20:18.311614 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-27a3-account-create-update-fw4gf" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.006714 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-757f5bc974-vgzhx"] Dec 02 15:20:21 crc kubenswrapper[4900]: E1202 15:20:21.007360 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157d9be1-babc-42f1-81f4-affacc965d19" containerName="mariadb-account-create-update" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.007371 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="157d9be1-babc-42f1-81f4-affacc965d19" containerName="mariadb-account-create-update" Dec 02 15:20:21 crc kubenswrapper[4900]: E1202 15:20:21.007410 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4953e279-5ba5-4fad-9b71-f4baa47a27c2" containerName="mariadb-database-create" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.007417 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="4953e279-5ba5-4fad-9b71-f4baa47a27c2" containerName="mariadb-database-create" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.007612 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="157d9be1-babc-42f1-81f4-affacc965d19" containerName="mariadb-account-create-update" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.007623 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="4953e279-5ba5-4fad-9b71-f4baa47a27c2" containerName="mariadb-database-create" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.009017 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.016821 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-757f5bc974-vgzhx"] Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.016988 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.023141 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.023376 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-5twv4" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.028951 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/71c00568-6d73-4684-ba1e-010757ff1e63-octavia-run\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.029099 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c00568-6d73-4684-ba1e-010757ff1e63-combined-ca-bundle\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.029141 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c00568-6d73-4684-ba1e-010757ff1e63-config-data\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.029251 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c00568-6d73-4684-ba1e-010757ff1e63-scripts\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.029300 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/71c00568-6d73-4684-ba1e-010757ff1e63-config-data-merged\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.130691 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/71c00568-6d73-4684-ba1e-010757ff1e63-octavia-run\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.131066 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c00568-6d73-4684-ba1e-010757ff1e63-combined-ca-bundle\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 
15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.131100 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c00568-6d73-4684-ba1e-010757ff1e63-config-data\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.131157 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c00568-6d73-4684-ba1e-010757ff1e63-scripts\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.131190 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/71c00568-6d73-4684-ba1e-010757ff1e63-config-data-merged\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.131233 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/71c00568-6d73-4684-ba1e-010757ff1e63-octavia-run\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.131583 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/71c00568-6d73-4684-ba1e-010757ff1e63-config-data-merged\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.141119 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c00568-6d73-4684-ba1e-010757ff1e63-config-data\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.142723 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c00568-6d73-4684-ba1e-010757ff1e63-scripts\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.144301 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c00568-6d73-4684-ba1e-010757ff1e63-combined-ca-bundle\") pod \"octavia-api-757f5bc974-vgzhx\" (UID: \"71c00568-6d73-4684-ba1e-010757ff1e63\") " pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.358908 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:21 crc kubenswrapper[4900]: I1202 15:20:21.873749 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-757f5bc974-vgzhx"] Dec 02 15:20:21 crc kubenswrapper[4900]: W1202 15:20:21.877352 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71c00568_6d73_4684_ba1e_010757ff1e63.slice/crio-8e15e535d87541a1fee200e927833f4682a63513beebd31181196fe0a8ef721d WatchSource:0}: Error finding container 8e15e535d87541a1fee200e927833f4682a63513beebd31181196fe0a8ef721d: Status 404 returned error can't find the container with id 8e15e535d87541a1fee200e927833f4682a63513beebd31181196fe0a8ef721d Dec 02 15:20:22 crc kubenswrapper[4900]: I1202 15:20:22.344549 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-757f5bc974-vgzhx" event={"ID":"71c00568-6d73-4684-ba1e-010757ff1e63","Type":"ContainerStarted","Data":"8e15e535d87541a1fee200e927833f4682a63513beebd31181196fe0a8ef721d"} Dec 02 15:20:26 crc kubenswrapper[4900]: I1202 15:20:26.910260 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:20:26 crc kubenswrapper[4900]: E1202 15:20:26.911190 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:20:27 crc kubenswrapper[4900]: I1202 15:20:27.039682 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6fxsq"] Dec 02 15:20:27 crc kubenswrapper[4900]: I1202 15:20:27.049763 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6fxsq"] Dec 02 15:20:28 crc kubenswrapper[4900]: I1202 15:20:28.930022 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226107e9-7c04-4d39-b2c2-78e6e5cfd695" path="/var/lib/kubelet/pods/226107e9-7c04-4d39-b2c2-78e6e5cfd695/volumes" Dec 02 15:20:31 crc kubenswrapper[4900]: I1202 15:20:31.454600 4900 generic.go:334] "Generic (PLEG): container finished" podID="71c00568-6d73-4684-ba1e-010757ff1e63" containerID="ef1d9b14d4f1f2a639be463376f734db789613aac54d71d6c2cc34d7b61ec7c1" exitCode=0 Dec 02 15:20:31 crc kubenswrapper[4900]: I1202 15:20:31.454738 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-757f5bc974-vgzhx" event={"ID":"71c00568-6d73-4684-ba1e-010757ff1e63","Type":"ContainerDied","Data":"ef1d9b14d4f1f2a639be463376f734db789613aac54d71d6c2cc34d7b61ec7c1"} Dec 02 15:20:32 crc kubenswrapper[4900]: I1202 15:20:32.468918 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-757f5bc974-vgzhx" event={"ID":"71c00568-6d73-4684-ba1e-010757ff1e63","Type":"ContainerStarted","Data":"f63bb7e7d7a917dd44f3f3f043547583ee39d1076ced0431d3cebf52e147e53a"} Dec 02 15:20:32 crc kubenswrapper[4900]: I1202 15:20:32.469167 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-757f5bc974-vgzhx" event={"ID":"71c00568-6d73-4684-ba1e-010757ff1e63","Type":"ContainerStarted","Data":"d7cb6a21989ad4225b5e3815cdf9032c722593b5d2418fd0befce31876bfa82a"} Dec 02 15:20:32 crc 
kubenswrapper[4900]: I1202 15:20:32.469184 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:32 crc kubenswrapper[4900]: I1202 15:20:32.469196 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:32 crc kubenswrapper[4900]: I1202 15:20:32.495054 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-757f5bc974-vgzhx" podStartSLOduration=3.360814255 podStartE2EDuration="12.495032776s" podCreationTimestamp="2025-12-02 15:20:20 +0000 UTC" firstStartedPulling="2025-12-02 15:20:21.880695307 +0000 UTC m=+5867.296509178" lastFinishedPulling="2025-12-02 15:20:31.014913848 +0000 UTC m=+5876.430727699" observedRunningTime="2025-12-02 15:20:32.492310369 +0000 UTC m=+5877.908124230" watchObservedRunningTime="2025-12-02 15:20:32.495032776 +0000 UTC m=+5877.910846627" Dec 02 15:20:37 crc kubenswrapper[4900]: I1202 15:20:37.909914 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:20:37 crc kubenswrapper[4900]: E1202 15:20:37.910693 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.443885 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-glzwh" podUID="07f3aa4d-40c4-45df-a374-1e2e908f7e3b" containerName="ovn-controller" probeResult="failure" output=< Dec 02 15:20:40 crc kubenswrapper[4900]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 02 15:20:40 crc kubenswrapper[4900]: > Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.492025 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.561367 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.562700 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2vsfp" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.682121 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-pbq7n"] Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.684401 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.689590 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.690766 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.691068 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-glzwh-config-8fwz8"] Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.691068 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.692875 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.698714 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.700806 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-pbq7n"] Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.759891 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-glzwh-config-8fwz8"] Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.854474 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-757f5bc974-vgzhx" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.888358 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6w94\" (UniqueName: \"kubernetes.io/projected/ab368194-6815-412b-b1a6-03b76716dd2b-kube-api-access-x6w94\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.888427 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.888471 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-scripts\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.888495 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-hm-ports\") pod \"octavia-rsyslog-pbq7n\" (UID: \"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.888520 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-config-data-merged\") pod \"octavia-rsyslog-pbq7n\" (UID: 
\"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.888542 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-additional-scripts\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.888570 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-config-data\") pod \"octavia-rsyslog-pbq7n\" (UID: \"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.888598 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run-ovn\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.888614 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-scripts\") pod \"octavia-rsyslog-pbq7n\" (UID: \"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.888716 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-log-ovn\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.991531 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-scripts\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.992062 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-hm-ports\") pod \"octavia-rsyslog-pbq7n\" (UID: \"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.992293 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-config-data-merged\") pod \"octavia-rsyslog-pbq7n\" (UID: \"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.992502 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-additional-scripts\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: 
\"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.992748 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-config-data\") pod \"octavia-rsyslog-pbq7n\" (UID: \"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.992935 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-hm-ports\") pod \"octavia-rsyslog-pbq7n\" (UID: \"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.992790 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-config-data-merged\") pod \"octavia-rsyslog-pbq7n\" (UID: \"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.993171 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-additional-scripts\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.993912 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run-ovn\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.994084 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-scripts\") pod \"octavia-rsyslog-pbq7n\" (UID: \"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.994802 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-log-ovn\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.995138 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6w94\" (UniqueName: \"kubernetes.io/projected/ab368194-6815-412b-b1a6-03b76716dd2b-kube-api-access-x6w94\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.995348 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc 
kubenswrapper[4900]: I1202 15:20:40.996316 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.996524 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run-ovn\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.996747 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-log-ovn\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:40 crc kubenswrapper[4900]: I1202 15:20:40.997674 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-scripts\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.002107 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-config-data\") pod \"octavia-rsyslog-pbq7n\" (UID: \"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.002438 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348-scripts\") pod \"octavia-rsyslog-pbq7n\" (UID: \"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348\") " pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.013569 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6w94\" (UniqueName: \"kubernetes.io/projected/ab368194-6815-412b-b1a6-03b76716dd2b-kube-api-access-x6w94\") pod \"ovn-controller-glzwh-config-8fwz8\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.046017 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.064523 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.449312 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-c9w7m"] Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.451594 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-c9w7m" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.453837 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.474685 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-c9w7m"] Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.507862 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-amphora-image\") pod \"octavia-image-upload-59f8cff499-c9w7m\" (UID: \"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87\") " pod="openstack/octavia-image-upload-59f8cff499-c9w7m" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.507948 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-httpd-config\") pod \"octavia-image-upload-59f8cff499-c9w7m\" (UID: \"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87\") " pod="openstack/octavia-image-upload-59f8cff499-c9w7m" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.611845 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-amphora-image\") pod \"octavia-image-upload-59f8cff499-c9w7m\" (UID: \"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87\") " pod="openstack/octavia-image-upload-59f8cff499-c9w7m" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.611902 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-httpd-config\") pod \"octavia-image-upload-59f8cff499-c9w7m\" (UID: \"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87\") " pod="openstack/octavia-image-upload-59f8cff499-c9w7m" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.612781 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-amphora-image\") pod \"octavia-image-upload-59f8cff499-c9w7m\" (UID: \"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87\") " pod="openstack/octavia-image-upload-59f8cff499-c9w7m" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.623558 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-httpd-config\") pod \"octavia-image-upload-59f8cff499-c9w7m\" (UID: \"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87\") " pod="openstack/octavia-image-upload-59f8cff499-c9w7m" Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.756400 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-pbq7n"] Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.778712 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-glzwh-config-8fwz8"] Dec 02 15:20:41 crc kubenswrapper[4900]: W1202 15:20:41.780441 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab368194_6815_412b_b1a6_03b76716dd2b.slice/crio-e2337249f4fce891cf74dd6300c1b75e227afacf82893b60f284400bb66ac698 WatchSource:0}: Error finding container 
e2337249f4fce891cf74dd6300c1b75e227afacf82893b60f284400bb66ac698: Status 404 returned error can't find the container with id e2337249f4fce891cf74dd6300c1b75e227afacf82893b60f284400bb66ac698 Dec 02 15:20:41 crc kubenswrapper[4900]: I1202 15:20:41.792472 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-c9w7m" Dec 02 15:20:42 crc kubenswrapper[4900]: W1202 15:20:42.295942 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cd1902d_2c7f_4c26_9b89_1bd1cb234e87.slice/crio-a72537e8679194a231179946e38153ece4e170ef3e7b25256bf2427c7e5a0f80 WatchSource:0}: Error finding container a72537e8679194a231179946e38153ece4e170ef3e7b25256bf2427c7e5a0f80: Status 404 returned error can't find the container with id a72537e8679194a231179946e38153ece4e170ef3e7b25256bf2427c7e5a0f80 Dec 02 15:20:42 crc kubenswrapper[4900]: I1202 15:20:42.308207 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-c9w7m"] Dec 02 15:20:42 crc kubenswrapper[4900]: I1202 15:20:42.577276 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-glzwh-config-8fwz8" event={"ID":"ab368194-6815-412b-b1a6-03b76716dd2b","Type":"ContainerStarted","Data":"b08b8423171fdebb7fc3bd335bb2f46dd16200df8f085cc0cf605430f8c3b34b"} Dec 02 15:20:42 crc kubenswrapper[4900]: I1202 15:20:42.577756 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-glzwh-config-8fwz8" event={"ID":"ab368194-6815-412b-b1a6-03b76716dd2b","Type":"ContainerStarted","Data":"e2337249f4fce891cf74dd6300c1b75e227afacf82893b60f284400bb66ac698"} Dec 02 15:20:42 crc kubenswrapper[4900]: I1202 15:20:42.580596 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-c9w7m" event={"ID":"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87","Type":"ContainerStarted","Data":"a72537e8679194a231179946e38153ece4e170ef3e7b25256bf2427c7e5a0f80"} Dec 02 15:20:42 crc kubenswrapper[4900]: I1202 15:20:42.582849 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-pbq7n" event={"ID":"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348","Type":"ContainerStarted","Data":"a07c968e83deb46f482b9cadbbc2667856c21aa730eaa76d251405de2a292e43"} Dec 02 15:20:42 crc kubenswrapper[4900]: I1202 15:20:42.601169 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-glzwh-config-8fwz8" podStartSLOduration=2.601148392 podStartE2EDuration="2.601148392s" podCreationTimestamp="2025-12-02 15:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:20:42.597028217 +0000 UTC m=+5888.012842068" watchObservedRunningTime="2025-12-02 15:20:42.601148392 +0000 UTC m=+5888.016962243" Dec 02 15:20:43 crc kubenswrapper[4900]: I1202 15:20:43.607235 4900 generic.go:334] "Generic (PLEG): container finished" podID="ab368194-6815-412b-b1a6-03b76716dd2b" containerID="b08b8423171fdebb7fc3bd335bb2f46dd16200df8f085cc0cf605430f8c3b34b" exitCode=0 Dec 02 15:20:43 crc kubenswrapper[4900]: I1202 15:20:43.607325 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-glzwh-config-8fwz8" event={"ID":"ab368194-6815-412b-b1a6-03b76716dd2b","Type":"ContainerDied","Data":"b08b8423171fdebb7fc3bd335bb2f46dd16200df8f085cc0cf605430f8c3b34b"} Dec 02 15:20:44 crc 
kubenswrapper[4900]: I1202 15:20:44.627847 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-pbq7n" event={"ID":"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348","Type":"ContainerStarted","Data":"dd207c875985dac6c2ea6d402a2ceff664b705156cea36faf46aa3170e9feca2"} Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.310142 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.395420 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-additional-scripts\") pod \"ab368194-6815-412b-b1a6-03b76716dd2b\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.395479 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-log-ovn\") pod \"ab368194-6815-412b-b1a6-03b76716dd2b\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.395530 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-scripts\") pod \"ab368194-6815-412b-b1a6-03b76716dd2b\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.395550 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run\") pod \"ab368194-6815-412b-b1a6-03b76716dd2b\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.395633 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run-ovn\") pod \"ab368194-6815-412b-b1a6-03b76716dd2b\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.395678 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6w94\" (UniqueName: \"kubernetes.io/projected/ab368194-6815-412b-b1a6-03b76716dd2b-kube-api-access-x6w94\") pod \"ab368194-6815-412b-b1a6-03b76716dd2b\" (UID: \"ab368194-6815-412b-b1a6-03b76716dd2b\") " Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.397272 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run" (OuterVolumeSpecName: "var-run") pod "ab368194-6815-412b-b1a6-03b76716dd2b" (UID: "ab368194-6815-412b-b1a6-03b76716dd2b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.397313 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ab368194-6815-412b-b1a6-03b76716dd2b" (UID: "ab368194-6815-412b-b1a6-03b76716dd2b"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.397336 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ab368194-6815-412b-b1a6-03b76716dd2b" (UID: "ab368194-6815-412b-b1a6-03b76716dd2b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.397928 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ab368194-6815-412b-b1a6-03b76716dd2b" (UID: "ab368194-6815-412b-b1a6-03b76716dd2b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.398016 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-scripts" (OuterVolumeSpecName: "scripts") pod "ab368194-6815-412b-b1a6-03b76716dd2b" (UID: "ab368194-6815-412b-b1a6-03b76716dd2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.401996 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab368194-6815-412b-b1a6-03b76716dd2b-kube-api-access-x6w94" (OuterVolumeSpecName: "kube-api-access-x6w94") pod "ab368194-6815-412b-b1a6-03b76716dd2b" (UID: "ab368194-6815-412b-b1a6-03b76716dd2b"). InnerVolumeSpecName "kube-api-access-x6w94". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.432730 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-glzwh" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.497180 4900 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.497209 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6w94\" (UniqueName: \"kubernetes.io/projected/ab368194-6815-412b-b1a6-03b76716dd2b-kube-api-access-x6w94\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.497220 4900 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.497228 4900 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.497237 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab368194-6815-412b-b1a6-03b76716dd2b-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.497244 4900 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/ab368194-6815-412b-b1a6-03b76716dd2b-var-run\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.647405 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-glzwh-config-8fwz8" Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.647793 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-glzwh-config-8fwz8" event={"ID":"ab368194-6815-412b-b1a6-03b76716dd2b","Type":"ContainerDied","Data":"e2337249f4fce891cf74dd6300c1b75e227afacf82893b60f284400bb66ac698"} Dec 02 15:20:45 crc kubenswrapper[4900]: I1202 15:20:45.647870 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2337249f4fce891cf74dd6300c1b75e227afacf82893b60f284400bb66ac698" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.107194 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-jdx4z"] Dec 02 15:20:46 crc kubenswrapper[4900]: E1202 15:20:46.107676 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab368194-6815-412b-b1a6-03b76716dd2b" containerName="ovn-config" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.107696 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab368194-6815-412b-b1a6-03b76716dd2b" containerName="ovn-config" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.107898 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab368194-6815-412b-b1a6-03b76716dd2b" containerName="ovn-config" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.108890 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.111817 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.111921 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.111817 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.116773 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-jdx4z"] Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.217186 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-combined-ca-bundle\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.217233 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-config-data\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.217313 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-scripts\") pod \"octavia-healthmanager-jdx4z\" (UID: 
\"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.217383 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/53ade2fa-2048-4a3b-9035-c981bb812173-hm-ports\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.217458 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/53ade2fa-2048-4a3b-9035-c981bb812173-config-data-merged\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.217537 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-amphora-certs\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.319172 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/53ade2fa-2048-4a3b-9035-c981bb812173-config-data-merged\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.319324 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-amphora-certs\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.319429 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-combined-ca-bundle\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.319479 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-config-data\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.319551 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-scripts\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.319668 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/53ade2fa-2048-4a3b-9035-c981bb812173-config-data-merged\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") 
" pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.319634 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/53ade2fa-2048-4a3b-9035-c981bb812173-hm-ports\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.321426 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/53ade2fa-2048-4a3b-9035-c981bb812173-hm-ports\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.324861 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-combined-ca-bundle\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.324923 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-config-data\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.326097 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-amphora-certs\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.326546 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ade2fa-2048-4a3b-9035-c981bb812173-scripts\") pod \"octavia-healthmanager-jdx4z\" (UID: \"53ade2fa-2048-4a3b-9035-c981bb812173\") " pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.404505 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-glzwh-config-8fwz8"] Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.415521 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-glzwh-config-8fwz8"] Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.425493 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.511501 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-glzwh-config-l84pm"] Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.520793 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.525391 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.532146 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-glzwh-config-l84pm"] Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.628797 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-additional-scripts\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.628849 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2qtr\" (UniqueName: \"kubernetes.io/projected/35926d85-435c-4627-ad8b-bfce8d91248e-kube-api-access-w2qtr\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.628896 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-scripts\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.628917 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.628964 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run-ovn\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.628987 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-log-ovn\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.659832 4900 generic.go:334] "Generic (PLEG): container finished" podID="d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348" containerID="dd207c875985dac6c2ea6d402a2ceff664b705156cea36faf46aa3170e9feca2" exitCode=0 Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.659870 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-pbq7n" event={"ID":"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348","Type":"ContainerDied","Data":"dd207c875985dac6c2ea6d402a2ceff664b705156cea36faf46aa3170e9feca2"} Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.731179 4900 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-additional-scripts\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.731555 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2qtr\" (UniqueName: \"kubernetes.io/projected/35926d85-435c-4627-ad8b-bfce8d91248e-kube-api-access-w2qtr\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.731617 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-scripts\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.731661 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.731727 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run-ovn\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.731759 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-log-ovn\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.732041 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run-ovn\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.732110 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-log-ovn\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.732195 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.732382 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-additional-scripts\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.735896 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-scripts\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.753441 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2qtr\" (UniqueName: \"kubernetes.io/projected/35926d85-435c-4627-ad8b-bfce8d91248e-kube-api-access-w2qtr\") pod \"ovn-controller-glzwh-config-l84pm\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.928577 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab368194-6815-412b-b1a6-03b76716dd2b" path="/var/lib/kubelet/pods/ab368194-6815-412b-b1a6-03b76716dd2b/volumes" Dec 02 15:20:46 crc kubenswrapper[4900]: I1202 15:20:46.934837 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:47 crc kubenswrapper[4900]: I1202 15:20:47.053636 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-jdx4z"] Dec 02 15:20:47 crc kubenswrapper[4900]: I1202 15:20:47.462413 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-glzwh-config-l84pm"] Dec 02 15:20:47 crc kubenswrapper[4900]: I1202 15:20:47.682486 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-glzwh-config-l84pm" event={"ID":"35926d85-435c-4627-ad8b-bfce8d91248e","Type":"ContainerStarted","Data":"f9207ea7728c4edc3d7be7a7990057dc687cc4ce93d9569dfce73f5b6de24d5f"} Dec 02 15:20:47 crc kubenswrapper[4900]: I1202 15:20:47.684798 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jdx4z" event={"ID":"53ade2fa-2048-4a3b-9035-c981bb812173","Type":"ContainerStarted","Data":"4ced0bc4652665b6cda787da90edd4a3ad0a5276b68c5dde08b74b68d4d93eef"} Dec 02 15:20:48 crc kubenswrapper[4900]: E1202 15:20:48.353527 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35926d85_435c_4627_ad8b_bfce8d91248e.slice/crio-conmon-9645877a1bf80624d27cc510d1d383b6d9542a151c58ab6da2e086cdb6586430.scope\": RecentStats: unable to find data in memory cache]" Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.703553 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jdx4z" event={"ID":"53ade2fa-2048-4a3b-9035-c981bb812173","Type":"ContainerStarted","Data":"dcc793c9c35bea0f22ab912aae80cf2a4e2e703388ebbd907677d7ca7ac30db0"} Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.705727 4900 generic.go:334] "Generic (PLEG): container finished" podID="35926d85-435c-4627-ad8b-bfce8d91248e" containerID="9645877a1bf80624d27cc510d1d383b6d9542a151c58ab6da2e086cdb6586430" exitCode=0 Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.705756 4900 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-glzwh-config-l84pm" event={"ID":"35926d85-435c-4627-ad8b-bfce8d91248e","Type":"ContainerDied","Data":"9645877a1bf80624d27cc510d1d383b6d9542a151c58ab6da2e086cdb6586430"} Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.867617 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-5bzxq"] Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.869722 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.876900 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-5bzxq"] Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.920318 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.920793 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.925859 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-config-data\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.930573 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-amphora-certs\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.930870 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-combined-ca-bundle\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.930919 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/820aa17f-6436-4bc1-a178-acdd1488fb13-config-data-merged\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.930955 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-scripts\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:48 crc kubenswrapper[4900]: I1202 15:20:48.930992 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/820aa17f-6436-4bc1-a178-acdd1488fb13-hm-ports\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 
15:20:49.032184 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/820aa17f-6436-4bc1-a178-acdd1488fb13-hm-ports\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.032454 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-config-data\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.032625 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-amphora-certs\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.032896 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-combined-ca-bundle\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.033025 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/820aa17f-6436-4bc1-a178-acdd1488fb13-config-data-merged\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.033143 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-scripts\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.037043 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/820aa17f-6436-4bc1-a178-acdd1488fb13-config-data-merged\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.037067 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/820aa17f-6436-4bc1-a178-acdd1488fb13-hm-ports\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.037726 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-config-data\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.037882 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-combined-ca-bundle\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.041560 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-amphora-certs\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.045488 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820aa17f-6436-4bc1-a178-acdd1488fb13-scripts\") pod \"octavia-housekeeping-5bzxq\" (UID: \"820aa17f-6436-4bc1-a178-acdd1488fb13\") " pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.251811 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-5bzxq" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.725058 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-pbq7n" event={"ID":"d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348","Type":"ContainerStarted","Data":"6cf0a83ce08bda1a111991e756ee511d08e010e36f02820bc4e07c3745adf532"} Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.725504 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.772389 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-pbq7n" podStartSLOduration=2.8602628450000003 podStartE2EDuration="9.772371335s" podCreationTimestamp="2025-12-02 15:20:40 +0000 UTC" firstStartedPulling="2025-12-02 15:20:41.769851743 +0000 UTC m=+5887.185665594" lastFinishedPulling="2025-12-02 15:20:48.681960223 +0000 UTC m=+5894.097774084" observedRunningTime="2025-12-02 15:20:49.748400772 +0000 UTC m=+5895.164214633" watchObservedRunningTime="2025-12-02 15:20:49.772371335 +0000 UTC m=+5895.188185186" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.827880 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-kgltv"] Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.829666 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.832503 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.849542 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-scripts\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.849593 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data-merged\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.849612 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-combined-ca-bundle\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.849728 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.856170 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-kgltv"] Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.952707 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-scripts\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.952773 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data-merged\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.952801 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-combined-ca-bundle\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.952839 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.953499 4900 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data-merged\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.961612 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.961980 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-combined-ca-bundle\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:49 crc kubenswrapper[4900]: I1202 15:20:49.976488 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-scripts\") pod \"octavia-db-sync-kgltv\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") " pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:50 crc kubenswrapper[4900]: I1202 15:20:50.138113 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-5bzxq"] Dec 02 15:20:50 crc kubenswrapper[4900]: I1202 15:20:50.166538 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-kgltv" Dec 02 15:20:50 crc kubenswrapper[4900]: I1202 15:20:50.736075 4900 generic.go:334] "Generic (PLEG): container finished" podID="53ade2fa-2048-4a3b-9035-c981bb812173" containerID="dcc793c9c35bea0f22ab912aae80cf2a4e2e703388ebbd907677d7ca7ac30db0" exitCode=0 Dec 02 15:20:50 crc kubenswrapper[4900]: I1202 15:20:50.736131 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jdx4z" event={"ID":"53ade2fa-2048-4a3b-9035-c981bb812173","Type":"ContainerDied","Data":"dcc793c9c35bea0f22ab912aae80cf2a4e2e703388ebbd907677d7ca7ac30db0"} Dec 02 15:20:50 crc kubenswrapper[4900]: I1202 15:20:50.910922 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:20:50 crc kubenswrapper[4900]: E1202 15:20:50.911373 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.455609 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-bkczn"] Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.458727 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.462509 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.467776 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.469209 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-bkczn"] Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.527884 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-hm-ports\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.528149 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-amphora-certs\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.528187 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-config-data\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.528237 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-combined-ca-bundle\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.528357 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-config-data-merged\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.528392 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-scripts\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.629826 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-config-data-merged\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.629883 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-scripts\") pod \"octavia-worker-bkczn\" (UID: 
\"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.629914 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-hm-ports\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.629931 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-amphora-certs\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.629963 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-config-data\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.630011 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-combined-ca-bundle\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.632627 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-hm-ports\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.632920 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-config-data-merged\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.642580 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-amphora-certs\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.646264 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-combined-ca-bundle\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.653732 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-scripts\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.654389 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a53b311e-3f6b-48aa-b306-72f3e26c4ce9-config-data\") pod \"octavia-worker-bkczn\" (UID: \"a53b311e-3f6b-48aa-b306-72f3e26c4ce9\") " pod="openstack/octavia-worker-bkczn" Dec 02 15:20:52 crc kubenswrapper[4900]: I1202 15:20:52.783137 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-bkczn" Dec 02 15:20:54 crc kubenswrapper[4900]: W1202 15:20:54.693879 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod820aa17f_6436_4bc1_a178_acdd1488fb13.slice/crio-83cf1e3d79f88c6ad68712072107b4c239a64eed0531eab4b6a5ad1737923ce2 WatchSource:0}: Error finding container 83cf1e3d79f88c6ad68712072107b4c239a64eed0531eab4b6a5ad1737923ce2: Status 404 returned error can't find the container with id 83cf1e3d79f88c6ad68712072107b4c239a64eed0531eab4b6a5ad1737923ce2 Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.774538 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.782672 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-5bzxq" event={"ID":"820aa17f-6436-4bc1-a178-acdd1488fb13","Type":"ContainerStarted","Data":"83cf1e3d79f88c6ad68712072107b4c239a64eed0531eab4b6a5ad1737923ce2"} Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.784268 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-glzwh-config-l84pm" event={"ID":"35926d85-435c-4627-ad8b-bfce8d91248e","Type":"ContainerDied","Data":"f9207ea7728c4edc3d7be7a7990057dc687cc4ce93d9569dfce73f5b6de24d5f"} Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.784316 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9207ea7728c4edc3d7be7a7990057dc687cc4ce93d9569dfce73f5b6de24d5f" Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.784335 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-glzwh-config-l84pm" Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.904092 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-additional-scripts\") pod \"35926d85-435c-4627-ad8b-bfce8d91248e\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.904159 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run-ovn\") pod \"35926d85-435c-4627-ad8b-bfce8d91248e\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.904255 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-scripts\") pod \"35926d85-435c-4627-ad8b-bfce8d91248e\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.904290 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "35926d85-435c-4627-ad8b-bfce8d91248e" (UID: "35926d85-435c-4627-ad8b-bfce8d91248e"). 
InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.904301 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2qtr\" (UniqueName: \"kubernetes.io/projected/35926d85-435c-4627-ad8b-bfce8d91248e-kube-api-access-w2qtr\") pod \"35926d85-435c-4627-ad8b-bfce8d91248e\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.904325 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-log-ovn\") pod \"35926d85-435c-4627-ad8b-bfce8d91248e\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.904350 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run\") pod \"35926d85-435c-4627-ad8b-bfce8d91248e\" (UID: \"35926d85-435c-4627-ad8b-bfce8d91248e\") " Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.904425 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "35926d85-435c-4627-ad8b-bfce8d91248e" (UID: "35926d85-435c-4627-ad8b-bfce8d91248e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.904544 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run" (OuterVolumeSpecName: "var-run") pod "35926d85-435c-4627-ad8b-bfce8d91248e" (UID: "35926d85-435c-4627-ad8b-bfce8d91248e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.904901 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "35926d85-435c-4627-ad8b-bfce8d91248e" (UID: "35926d85-435c-4627-ad8b-bfce8d91248e"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.904986 4900 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.905010 4900 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.905019 4900 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35926d85-435c-4627-ad8b-bfce8d91248e-var-run\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.905185 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-scripts" (OuterVolumeSpecName: "scripts") pod "35926d85-435c-4627-ad8b-bfce8d91248e" (UID: "35926d85-435c-4627-ad8b-bfce8d91248e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:20:54 crc kubenswrapper[4900]: I1202 15:20:54.909699 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35926d85-435c-4627-ad8b-bfce8d91248e-kube-api-access-w2qtr" (OuterVolumeSpecName: "kube-api-access-w2qtr") pod "35926d85-435c-4627-ad8b-bfce8d91248e" (UID: "35926d85-435c-4627-ad8b-bfce8d91248e"). InnerVolumeSpecName "kube-api-access-w2qtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.006389 4900 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.006428 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35926d85-435c-4627-ad8b-bfce8d91248e-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.006437 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2qtr\" (UniqueName: \"kubernetes.io/projected/35926d85-435c-4627-ad8b-bfce8d91248e-kube-api-access-w2qtr\") on node \"crc\" DevicePath \"\"" Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.447727 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-kgltv"] Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.575882 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-bkczn"] Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.793577 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-bkczn" event={"ID":"a53b311e-3f6b-48aa-b306-72f3e26c4ce9","Type":"ContainerStarted","Data":"3ec317169ea7125d388068d1f0f0bdfb757c265e5b52bd7917ce1900fddf0d5e"} Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.796185 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-c9w7m" event={"ID":"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87","Type":"ContainerStarted","Data":"3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0"} Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.803707 4900 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jdx4z" event={"ID":"53ade2fa-2048-4a3b-9035-c981bb812173","Type":"ContainerStarted","Data":"de8c1de49d713faff81b82975a9736712a9e36a0279f9f93283379c6e8495115"} Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.804684 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-jdx4z" Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.819562 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-kgltv" event={"ID":"0779c245-beae-4f64-a9d5-c4ad61d6c1e4","Type":"ContainerStarted","Data":"6318ce7888e8ccaf65eb158f0af633ffd162116b06fdd08c1e60eda48723f3de"} Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.819617 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-kgltv" event={"ID":"0779c245-beae-4f64-a9d5-c4ad61d6c1e4","Type":"ContainerStarted","Data":"767ae742302315508d72cf15c6d34e3724a0378f1ee77e0dc1730d321b879b4f"} Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.852411 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-jdx4z" podStartSLOduration=9.852387443 podStartE2EDuration="9.852387443s" podCreationTimestamp="2025-12-02 15:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:20:55.840178481 +0000 UTC m=+5901.255992332" watchObservedRunningTime="2025-12-02 15:20:55.852387443 +0000 UTC m=+5901.268201294" Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.871163 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-glzwh-config-l84pm"] Dec 02 15:20:55 crc kubenswrapper[4900]: I1202 15:20:55.889522 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-glzwh-config-l84pm"] Dec 02 15:20:56 crc kubenswrapper[4900]: I1202 15:20:56.092326 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-pbq7n" Dec 02 15:20:56 crc kubenswrapper[4900]: I1202 15:20:56.831379 4900 generic.go:334] "Generic (PLEG): container finished" podID="0779c245-beae-4f64-a9d5-c4ad61d6c1e4" containerID="6318ce7888e8ccaf65eb158f0af633ffd162116b06fdd08c1e60eda48723f3de" exitCode=0 Dec 02 15:20:56 crc kubenswrapper[4900]: I1202 15:20:56.831476 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-kgltv" event={"ID":"0779c245-beae-4f64-a9d5-c4ad61d6c1e4","Type":"ContainerDied","Data":"6318ce7888e8ccaf65eb158f0af633ffd162116b06fdd08c1e60eda48723f3de"} Dec 02 15:20:56 crc kubenswrapper[4900]: I1202 15:20:56.833732 4900 generic.go:334] "Generic (PLEG): container finished" podID="7cd1902d-2c7f-4c26-9b89-1bd1cb234e87" containerID="3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0" exitCode=0 Dec 02 15:20:56 crc kubenswrapper[4900]: I1202 15:20:56.834768 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-c9w7m" event={"ID":"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87","Type":"ContainerDied","Data":"3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0"} Dec 02 15:20:56 crc kubenswrapper[4900]: I1202 15:20:56.923735 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35926d85-435c-4627-ad8b-bfce8d91248e" path="/var/lib/kubelet/pods/35926d85-435c-4627-ad8b-bfce8d91248e/volumes" Dec 02 15:20:57 crc kubenswrapper[4900]: I1202 
15:20:57.846396 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-kgltv" event={"ID":"0779c245-beae-4f64-a9d5-c4ad61d6c1e4","Type":"ContainerStarted","Data":"b523f6e3b7077875d013004d2dc4e50898da8cf9da2a06fe1e615a36826a5b3e"} Dec 02 15:20:57 crc kubenswrapper[4900]: I1202 15:20:57.873880 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-kgltv" podStartSLOduration=8.873854294000001 podStartE2EDuration="8.873854294s" podCreationTimestamp="2025-12-02 15:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:20:57.868182325 +0000 UTC m=+5903.283996176" watchObservedRunningTime="2025-12-02 15:20:57.873854294 +0000 UTC m=+5903.289668145" Dec 02 15:20:58 crc kubenswrapper[4900]: I1202 15:20:58.856583 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-5bzxq" event={"ID":"820aa17f-6436-4bc1-a178-acdd1488fb13","Type":"ContainerStarted","Data":"d614c547e377437e2f813d896b85831b5087995b437e4d9a95373e5380a4f00a"} Dec 02 15:20:59 crc kubenswrapper[4900]: I1202 15:20:59.870144 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-bkczn" event={"ID":"a53b311e-3f6b-48aa-b306-72f3e26c4ce9","Type":"ContainerStarted","Data":"bb23edcde6827c48679afefa41506365209bf4db735fc4a669752923589b2710"} Dec 02 15:20:59 crc kubenswrapper[4900]: I1202 15:20:59.873628 4900 generic.go:334] "Generic (PLEG): container finished" podID="820aa17f-6436-4bc1-a178-acdd1488fb13" containerID="d614c547e377437e2f813d896b85831b5087995b437e4d9a95373e5380a4f00a" exitCode=0 Dec 02 15:20:59 crc kubenswrapper[4900]: I1202 15:20:59.874128 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-5bzxq" event={"ID":"820aa17f-6436-4bc1-a178-acdd1488fb13","Type":"ContainerDied","Data":"d614c547e377437e2f813d896b85831b5087995b437e4d9a95373e5380a4f00a"} Dec 02 15:20:59 crc kubenswrapper[4900]: I1202 15:20:59.876014 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-c9w7m" event={"ID":"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87","Type":"ContainerStarted","Data":"6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a"} Dec 02 15:20:59 crc kubenswrapper[4900]: I1202 15:20:59.929505 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-c9w7m" podStartSLOduration=2.261072522 podStartE2EDuration="18.929490623s" podCreationTimestamp="2025-12-02 15:20:41 +0000 UTC" firstStartedPulling="2025-12-02 15:20:42.299078025 +0000 UTC m=+5887.714891896" lastFinishedPulling="2025-12-02 15:20:58.967496136 +0000 UTC m=+5904.383309997" observedRunningTime="2025-12-02 15:20:59.926385306 +0000 UTC m=+5905.342199157" watchObservedRunningTime="2025-12-02 15:20:59.929490623 +0000 UTC m=+5905.345304474" Dec 02 15:21:00 crc kubenswrapper[4900]: I1202 15:21:00.893144 4900 generic.go:334] "Generic (PLEG): container finished" podID="a53b311e-3f6b-48aa-b306-72f3e26c4ce9" containerID="bb23edcde6827c48679afefa41506365209bf4db735fc4a669752923589b2710" exitCode=0 Dec 02 15:21:00 crc kubenswrapper[4900]: I1202 15:21:00.893354 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-bkczn" event={"ID":"a53b311e-3f6b-48aa-b306-72f3e26c4ce9","Type":"ContainerDied","Data":"bb23edcde6827c48679afefa41506365209bf4db735fc4a669752923589b2710"} Dec 02 15:21:00 
crc kubenswrapper[4900]: I1202 15:21:00.900953 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-5bzxq" event={"ID":"820aa17f-6436-4bc1-a178-acdd1488fb13","Type":"ContainerStarted","Data":"0680f4104ff57eb3d2d6ec540471b266259e2b969c8d9728edc5ad19731f47d8"}
Dec 02 15:21:00 crc kubenswrapper[4900]: I1202 15:21:00.901376 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-5bzxq"
Dec 02 15:21:00 crc kubenswrapper[4900]: I1202 15:21:00.910079 4900 generic.go:334] "Generic (PLEG): container finished" podID="0779c245-beae-4f64-a9d5-c4ad61d6c1e4" containerID="b523f6e3b7077875d013004d2dc4e50898da8cf9da2a06fe1e615a36826a5b3e" exitCode=0
Dec 02 15:21:00 crc kubenswrapper[4900]: I1202 15:21:00.945306 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-kgltv" event={"ID":"0779c245-beae-4f64-a9d5-c4ad61d6c1e4","Type":"ContainerDied","Data":"b523f6e3b7077875d013004d2dc4e50898da8cf9da2a06fe1e615a36826a5b3e"}
Dec 02 15:21:00 crc kubenswrapper[4900]: I1202 15:21:00.965855 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-5bzxq" podStartSLOduration=10.575708331 podStartE2EDuration="12.965833737s" podCreationTimestamp="2025-12-02 15:20:48 +0000 UTC" firstStartedPulling="2025-12-02 15:20:54.699600782 +0000 UTC m=+5900.115414633" lastFinishedPulling="2025-12-02 15:20:57.089726168 +0000 UTC m=+5902.505540039" observedRunningTime="2025-12-02 15:21:00.96131394 +0000 UTC m=+5906.377127791" watchObservedRunningTime="2025-12-02 15:21:00.965833737 +0000 UTC m=+5906.381647588"
Dec 02 15:21:01 crc kubenswrapper[4900]: I1202 15:21:01.466068 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-jdx4z"
Dec 02 15:21:01 crc kubenswrapper[4900]: I1202 15:21:01.926722 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-bkczn" event={"ID":"a53b311e-3f6b-48aa-b306-72f3e26c4ce9","Type":"ContainerStarted","Data":"e16cea78da475cb97dc4f8b709493b9ea1f4e20aa80e1a3f676d4116358ecd99"}
Dec 02 15:21:01 crc kubenswrapper[4900]: I1202 15:21:01.926867 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-bkczn"
Dec 02 15:21:01 crc kubenswrapper[4900]: I1202 15:21:01.958457 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-bkczn" podStartSLOduration=6.6164462539999995 podStartE2EDuration="9.958435384s" podCreationTimestamp="2025-12-02 15:20:52 +0000 UTC" firstStartedPulling="2025-12-02 15:20:55.601575375 +0000 UTC m=+5901.017389226" lastFinishedPulling="2025-12-02 15:20:58.943564495 +0000 UTC m=+5904.359378356" observedRunningTime="2025-12-02 15:21:01.945220823 +0000 UTC m=+5907.361034674" watchObservedRunningTime="2025-12-02 15:21:01.958435384 +0000 UTC m=+5907.374249235"
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.297394 4900 scope.go:117] "RemoveContainer" containerID="f364c8e3ce4a6c44648db53e260c9705179a3092500aaf16a9589ac6a262ae83"
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.437188 4900 scope.go:117] "RemoveContainer" containerID="2af0e5dbefe67794b5dca649b1753ebb9dd2e43972e13e0d46193f0090b66bd7"
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.444960 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-kgltv"
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.477251 4900 scope.go:117] "RemoveContainer" containerID="402524039ce6b81615570a0ea96746b00a6a992df9b9cbc4cdfce35c89a6baf1"
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.534487 4900 scope.go:117] "RemoveContainer" containerID="5413cd290408b0f577d82ecf1fd06b4d03efbac3b009f50226dae79039809983"
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.560045 4900 scope.go:117] "RemoveContainer" containerID="848fb28230e049f9c653cb59ccc032c4836d04ec6a23f9081408df5909b61936"
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.584210 4900 scope.go:117] "RemoveContainer" containerID="50fee37938d55f334708f478b50a61bad8b719fa4ee610ade06df784bd4b812f"
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.601271 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-combined-ca-bundle\") pod \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") "
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.601616 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data-merged\") pod \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") "
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.601786 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data\") pod \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") "
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.602069 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-scripts\") pod \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\" (UID: \"0779c245-beae-4f64-a9d5-c4ad61d6c1e4\") "
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.607144 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-scripts" (OuterVolumeSpecName: "scripts") pod "0779c245-beae-4f64-a9d5-c4ad61d6c1e4" (UID: "0779c245-beae-4f64-a9d5-c4ad61d6c1e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.607978 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data" (OuterVolumeSpecName: "config-data") pod "0779c245-beae-4f64-a9d5-c4ad61d6c1e4" (UID: "0779c245-beae-4f64-a9d5-c4ad61d6c1e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.626083 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "0779c245-beae-4f64-a9d5-c4ad61d6c1e4" (UID: "0779c245-beae-4f64-a9d5-c4ad61d6c1e4"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.627789 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0779c245-beae-4f64-a9d5-c4ad61d6c1e4" (UID: "0779c245-beae-4f64-a9d5-c4ad61d6c1e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.705588 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.705667 4900 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data-merged\") on node \"crc\" DevicePath \"\""
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.705680 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.705693 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0779c245-beae-4f64-a9d5-c4ad61d6c1e4-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.936666 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-kgltv" event={"ID":"0779c245-beae-4f64-a9d5-c4ad61d6c1e4","Type":"ContainerDied","Data":"767ae742302315508d72cf15c6d34e3724a0378f1ee77e0dc1730d321b879b4f"}
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.936908 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="767ae742302315508d72cf15c6d34e3724a0378f1ee77e0dc1730d321b879b4f"
Dec 02 15:21:02 crc kubenswrapper[4900]: I1202 15:21:02.936761 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-kgltv"
Dec 02 15:21:03 crc kubenswrapper[4900]: I1202 15:21:03.911527 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:21:03 crc kubenswrapper[4900]: E1202 15:21:03.911965 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:21:07 crc kubenswrapper[4900]: I1202 15:21:07.821019 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-bkczn"
Dec 02 15:21:17 crc kubenswrapper[4900]: I1202 15:21:17.911014 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:21:17 crc kubenswrapper[4900]: E1202 15:21:17.912516 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:21:19 crc kubenswrapper[4900]: I1202 15:21:19.290953 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-5bzxq"
Dec 02 15:21:25 crc kubenswrapper[4900]: I1202 15:21:25.237603 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-c9w7m"]
Dec 02 15:21:25 crc kubenswrapper[4900]: I1202 15:21:25.238374 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-59f8cff499-c9w7m" podUID="7cd1902d-2c7f-4c26-9b89-1bd1cb234e87" containerName="octavia-amphora-httpd" containerID="cri-o://6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a" gracePeriod=30
Dec 02 15:21:25 crc kubenswrapper[4900]: I1202 15:21:25.876411 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-c9w7m"
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.056593 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-httpd-config\") pod \"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87\" (UID: \"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87\") "
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.056964 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-amphora-image\") pod \"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87\" (UID: \"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87\") "
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.117830 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7cd1902d-2c7f-4c26-9b89-1bd1cb234e87" (UID: "7cd1902d-2c7f-4c26-9b89-1bd1cb234e87"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.162708 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-httpd-config\") on node \"crc\" DevicePath \"\""
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.211299 4900 generic.go:334] "Generic (PLEG): container finished" podID="7cd1902d-2c7f-4c26-9b89-1bd1cb234e87" containerID="6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a" exitCode=0
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.211347 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-c9w7m" event={"ID":"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87","Type":"ContainerDied","Data":"6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a"}
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.211383 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-c9w7m" event={"ID":"7cd1902d-2c7f-4c26-9b89-1bd1cb234e87","Type":"ContainerDied","Data":"a72537e8679194a231179946e38153ece4e170ef3e7b25256bf2427c7e5a0f80"}
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.211401 4900 scope.go:117] "RemoveContainer" containerID="6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a"
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.211590 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-c9w7m"
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.215141 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "7cd1902d-2c7f-4c26-9b89-1bd1cb234e87" (UID: "7cd1902d-2c7f-4c26-9b89-1bd1cb234e87"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.264280 4900 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87-amphora-image\") on node \"crc\" DevicePath \"\""
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.278951 4900 scope.go:117] "RemoveContainer" containerID="3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0"
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.301837 4900 scope.go:117] "RemoveContainer" containerID="6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a"
Dec 02 15:21:26 crc kubenswrapper[4900]: E1202 15:21:26.302332 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a\": container with ID starting with 6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a not found: ID does not exist" containerID="6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a"
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.302362 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a"} err="failed to get container status \"6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a\": rpc error: code = NotFound desc = could not find container \"6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a\": container with ID starting with 6a6517a45e3bbead830cf223094f35514d52160013ea966eb71b8c6198d71e9a not found: ID does not exist"
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.302381 4900 scope.go:117] "RemoveContainer" containerID="3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0"
Dec 02 15:21:26 crc kubenswrapper[4900]: E1202 15:21:26.302839 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0\": container with ID starting with 3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0 not found: ID does not exist" containerID="3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0"
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.302859 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0"} err="failed to get container status \"3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0\": rpc error: code = NotFound desc = could not find container \"3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0\": container with ID starting with 3c1b11c8d44196cdb7b7bf5367e155d11f3630278848a3b91b699db05990c1b0 not found: ID does not exist"
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.546403 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-c9w7m"]
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.555587 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-c9w7m"]
Dec 02 15:21:26 crc kubenswrapper[4900]: I1202 15:21:26.923338 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd1902d-2c7f-4c26-9b89-1bd1cb234e87" path="/var/lib/kubelet/pods/7cd1902d-2c7f-4c26-9b89-1bd1cb234e87/volumes"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.592839 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-svfh6"]
Dec 02 15:21:29 crc kubenswrapper[4900]: E1202 15:21:29.593580 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd1902d-2c7f-4c26-9b89-1bd1cb234e87" containerName="octavia-amphora-httpd"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.593594 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd1902d-2c7f-4c26-9b89-1bd1cb234e87" containerName="octavia-amphora-httpd"
Dec 02 15:21:29 crc kubenswrapper[4900]: E1202 15:21:29.593611 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35926d85-435c-4627-ad8b-bfce8d91248e" containerName="ovn-config"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.593617 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="35926d85-435c-4627-ad8b-bfce8d91248e" containerName="ovn-config"
Dec 02 15:21:29 crc kubenswrapper[4900]: E1202 15:21:29.593631 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0779c245-beae-4f64-a9d5-c4ad61d6c1e4" containerName="octavia-db-sync"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.593637 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0779c245-beae-4f64-a9d5-c4ad61d6c1e4" containerName="octavia-db-sync"
Dec 02 15:21:29 crc kubenswrapper[4900]: E1202 15:21:29.593683 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd1902d-2c7f-4c26-9b89-1bd1cb234e87" containerName="init"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.593688 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd1902d-2c7f-4c26-9b89-1bd1cb234e87" containerName="init"
Dec 02 15:21:29 crc kubenswrapper[4900]: E1202 15:21:29.593709 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0779c245-beae-4f64-a9d5-c4ad61d6c1e4" containerName="init"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.593714 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0779c245-beae-4f64-a9d5-c4ad61d6c1e4" containerName="init"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.593882 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="35926d85-435c-4627-ad8b-bfce8d91248e" containerName="ovn-config"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.593900 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd1902d-2c7f-4c26-9b89-1bd1cb234e87" containerName="octavia-amphora-httpd"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.593916 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0779c245-beae-4f64-a9d5-c4ad61d6c1e4" containerName="octavia-db-sync"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.595212 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-svfh6"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.597889 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.610823 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-svfh6"]
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.726814 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/93db835a-7a0f-4e36-ab43-5696fa15fb07-amphora-image\") pod \"octavia-image-upload-59f8cff499-svfh6\" (UID: \"93db835a-7a0f-4e36-ab43-5696fa15fb07\") " pod="openstack/octavia-image-upload-59f8cff499-svfh6"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.727721 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/93db835a-7a0f-4e36-ab43-5696fa15fb07-httpd-config\") pod \"octavia-image-upload-59f8cff499-svfh6\" (UID: \"93db835a-7a0f-4e36-ab43-5696fa15fb07\") " pod="openstack/octavia-image-upload-59f8cff499-svfh6"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.829658 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/93db835a-7a0f-4e36-ab43-5696fa15fb07-httpd-config\") pod \"octavia-image-upload-59f8cff499-svfh6\" (UID: \"93db835a-7a0f-4e36-ab43-5696fa15fb07\") " pod="openstack/octavia-image-upload-59f8cff499-svfh6"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.830164 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/93db835a-7a0f-4e36-ab43-5696fa15fb07-amphora-image\") pod \"octavia-image-upload-59f8cff499-svfh6\" (UID: \"93db835a-7a0f-4e36-ab43-5696fa15fb07\") " pod="openstack/octavia-image-upload-59f8cff499-svfh6"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.830570 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/93db835a-7a0f-4e36-ab43-5696fa15fb07-amphora-image\") pod \"octavia-image-upload-59f8cff499-svfh6\" (UID: \"93db835a-7a0f-4e36-ab43-5696fa15fb07\") " pod="openstack/octavia-image-upload-59f8cff499-svfh6"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.836702 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/93db835a-7a0f-4e36-ab43-5696fa15fb07-httpd-config\") pod \"octavia-image-upload-59f8cff499-svfh6\" (UID: \"93db835a-7a0f-4e36-ab43-5696fa15fb07\") " pod="openstack/octavia-image-upload-59f8cff499-svfh6"
Dec 02 15:21:29 crc kubenswrapper[4900]: I1202 15:21:29.922757 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-svfh6"
Dec 02 15:21:30 crc kubenswrapper[4900]: I1202 15:21:30.390979 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-svfh6"]
Dec 02 15:21:30 crc kubenswrapper[4900]: W1202 15:21:30.396050 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93db835a_7a0f_4e36_ab43_5696fa15fb07.slice/crio-1015a8c3adaa1225c143bbfd24fc05961fd9caa94e95de08e455c474b6f71922 WatchSource:0}: Error finding container 1015a8c3adaa1225c143bbfd24fc05961fd9caa94e95de08e455c474b6f71922: Status 404 returned error can't find the container with id 1015a8c3adaa1225c143bbfd24fc05961fd9caa94e95de08e455c474b6f71922
Dec 02 15:21:31 crc kubenswrapper[4900]: I1202 15:21:31.269733 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-svfh6" event={"ID":"93db835a-7a0f-4e36-ab43-5696fa15fb07","Type":"ContainerStarted","Data":"c502984b7d6263e35b4f703612566ecc5958ba11b7aae2289cc32ed775306449"}
Dec 02 15:21:31 crc kubenswrapper[4900]: I1202 15:21:31.270163 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-svfh6" event={"ID":"93db835a-7a0f-4e36-ab43-5696fa15fb07","Type":"ContainerStarted","Data":"1015a8c3adaa1225c143bbfd24fc05961fd9caa94e95de08e455c474b6f71922"}
Dec 02 15:21:31 crc kubenswrapper[4900]: I1202 15:21:31.910704 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:21:31 crc kubenswrapper[4900]: E1202 15:21:31.912130 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:21:32 crc kubenswrapper[4900]: I1202 15:21:32.282410 4900 generic.go:334] "Generic (PLEG): container finished" podID="93db835a-7a0f-4e36-ab43-5696fa15fb07" containerID="c502984b7d6263e35b4f703612566ecc5958ba11b7aae2289cc32ed775306449" exitCode=0
Dec 02 15:21:32 crc kubenswrapper[4900]: I1202 15:21:32.282497 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-svfh6" event={"ID":"93db835a-7a0f-4e36-ab43-5696fa15fb07","Type":"ContainerDied","Data":"c502984b7d6263e35b4f703612566ecc5958ba11b7aae2289cc32ed775306449"}
Dec 02 15:21:35 crc kubenswrapper[4900]: I1202 15:21:35.313158 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-svfh6" event={"ID":"93db835a-7a0f-4e36-ab43-5696fa15fb07","Type":"ContainerStarted","Data":"63962bb0bbd097ce325d95676854503844d117227bfb3c54c24786745be3c060"}
Dec 02 15:21:35 crc kubenswrapper[4900]: I1202 15:21:35.338454 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-svfh6" podStartSLOduration=1.92963607 podStartE2EDuration="6.338418827s" podCreationTimestamp="2025-12-02 15:21:29 +0000 UTC" firstStartedPulling="2025-12-02 15:21:30.40155466 +0000 UTC m=+5935.817368511" lastFinishedPulling="2025-12-02 15:21:34.810337407 +0000 UTC m=+5940.226151268" observedRunningTime="2025-12-02 15:21:35.325179515 +0000 UTC m=+5940.740993366" watchObservedRunningTime="2025-12-02 15:21:35.338418827 +0000 UTC m=+5940.754232708"
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.368239 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fcff5"]
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.371373 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.402772 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcff5"]
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.495497 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-catalog-content\") pod \"redhat-marketplace-fcff5\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") " pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.495602 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-utilities\") pod \"redhat-marketplace-fcff5\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") " pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.495708 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-645vn\" (UniqueName: \"kubernetes.io/projected/48868fea-dd4b-449f-b079-40e16659c62a-kube-api-access-645vn\") pod \"redhat-marketplace-fcff5\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") " pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.597550 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-catalog-content\") pod \"redhat-marketplace-fcff5\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") " pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.597610 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-utilities\") pod \"redhat-marketplace-fcff5\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") " pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.597667 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-645vn\" (UniqueName: \"kubernetes.io/projected/48868fea-dd4b-449f-b079-40e16659c62a-kube-api-access-645vn\") pod \"redhat-marketplace-fcff5\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") " pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.598284 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-catalog-content\") pod \"redhat-marketplace-fcff5\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") " pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.598375 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-utilities\") pod \"redhat-marketplace-fcff5\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") " pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.631299 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-645vn\" (UniqueName: \"kubernetes.io/projected/48868fea-dd4b-449f-b079-40e16659c62a-kube-api-access-645vn\") pod \"redhat-marketplace-fcff5\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") " pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:41 crc kubenswrapper[4900]: I1202 15:21:41.766119 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:42 crc kubenswrapper[4900]: I1202 15:21:42.245337 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcff5"]
Dec 02 15:21:42 crc kubenswrapper[4900]: W1202 15:21:42.263792 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48868fea_dd4b_449f_b079_40e16659c62a.slice/crio-52037b0a655fe5fd52a24ee81147615abd08ecd2f192543c8fc6aae18e4183e6 WatchSource:0}: Error finding container 52037b0a655fe5fd52a24ee81147615abd08ecd2f192543c8fc6aae18e4183e6: Status 404 returned error can't find the container with id 52037b0a655fe5fd52a24ee81147615abd08ecd2f192543c8fc6aae18e4183e6
Dec 02 15:21:42 crc kubenswrapper[4900]: I1202 15:21:42.397400 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcff5" event={"ID":"48868fea-dd4b-449f-b079-40e16659c62a","Type":"ContainerStarted","Data":"52037b0a655fe5fd52a24ee81147615abd08ecd2f192543c8fc6aae18e4183e6"}
Dec 02 15:21:43 crc kubenswrapper[4900]: I1202 15:21:43.412376 4900 generic.go:334] "Generic (PLEG): container finished" podID="48868fea-dd4b-449f-b079-40e16659c62a" containerID="2e01ca6c2841f8727c3f9296636a94f6cfcd094b2d1a1fdea3af0bcc61138a78" exitCode=0
Dec 02 15:21:43 crc kubenswrapper[4900]: I1202 15:21:43.412430 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcff5" event={"ID":"48868fea-dd4b-449f-b079-40e16659c62a","Type":"ContainerDied","Data":"2e01ca6c2841f8727c3f9296636a94f6cfcd094b2d1a1fdea3af0bcc61138a78"}
Dec 02 15:21:44 crc kubenswrapper[4900]: I1202 15:21:44.919891 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:21:44 crc kubenswrapper[4900]: E1202 15:21:44.921321 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:21:45 crc kubenswrapper[4900]: I1202 15:21:45.460889 4900 generic.go:334] "Generic (PLEG): container finished" podID="48868fea-dd4b-449f-b079-40e16659c62a" containerID="23cd374a6f7d3d9dd2ca6d590a55a7c2f36cc6a0069c5ae5dcd9bed68065b55a" exitCode=0
Dec 02 15:21:45 crc kubenswrapper[4900]: I1202 15:21:45.460966 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcff5" event={"ID":"48868fea-dd4b-449f-b079-40e16659c62a","Type":"ContainerDied","Data":"23cd374a6f7d3d9dd2ca6d590a55a7c2f36cc6a0069c5ae5dcd9bed68065b55a"}
Dec 02 15:21:46 crc kubenswrapper[4900]: I1202 15:21:46.470577 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcff5" event={"ID":"48868fea-dd4b-449f-b079-40e16659c62a","Type":"ContainerStarted","Data":"75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f"}
Dec 02 15:21:51 crc kubenswrapper[4900]: I1202 15:21:51.766336 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:51 crc kubenswrapper[4900]: I1202 15:21:51.767405 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:51 crc kubenswrapper[4900]: I1202 15:21:51.860822 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:51 crc kubenswrapper[4900]: I1202 15:21:51.890571 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fcff5" podStartSLOduration=8.289170491 podStartE2EDuration="10.890546256s" podCreationTimestamp="2025-12-02 15:21:41 +0000 UTC" firstStartedPulling="2025-12-02 15:21:43.415696137 +0000 UTC m=+5948.831509998" lastFinishedPulling="2025-12-02 15:21:46.017071922 +0000 UTC m=+5951.432885763" observedRunningTime="2025-12-02 15:21:46.498866403 +0000 UTC m=+5951.914680254" watchObservedRunningTime="2025-12-02 15:21:51.890546256 +0000 UTC m=+5957.306360147"
Dec 02 15:21:52 crc kubenswrapper[4900]: I1202 15:21:52.607407 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:52 crc kubenswrapper[4900]: I1202 15:21:52.678007 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcff5"]
Dec 02 15:21:54 crc kubenswrapper[4900]: I1202 15:21:54.564558 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fcff5" podUID="48868fea-dd4b-449f-b079-40e16659c62a" containerName="registry-server" containerID="cri-o://75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f" gracePeriod=2
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.107438 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.178347 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-catalog-content\") pod \"48868fea-dd4b-449f-b079-40e16659c62a\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") "
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.178456 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-utilities\") pod \"48868fea-dd4b-449f-b079-40e16659c62a\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") "
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.178685 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-645vn\" (UniqueName: \"kubernetes.io/projected/48868fea-dd4b-449f-b079-40e16659c62a-kube-api-access-645vn\") pod \"48868fea-dd4b-449f-b079-40e16659c62a\" (UID: \"48868fea-dd4b-449f-b079-40e16659c62a\") "
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.179419 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-utilities" (OuterVolumeSpecName: "utilities") pod "48868fea-dd4b-449f-b079-40e16659c62a" (UID: "48868fea-dd4b-449f-b079-40e16659c62a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.199059 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48868fea-dd4b-449f-b079-40e16659c62a-kube-api-access-645vn" (OuterVolumeSpecName: "kube-api-access-645vn") pod "48868fea-dd4b-449f-b079-40e16659c62a" (UID: "48868fea-dd4b-449f-b079-40e16659c62a"). InnerVolumeSpecName "kube-api-access-645vn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.215793 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48868fea-dd4b-449f-b079-40e16659c62a" (UID: "48868fea-dd4b-449f-b079-40e16659c62a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.280881 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-645vn\" (UniqueName: \"kubernetes.io/projected/48868fea-dd4b-449f-b079-40e16659c62a-kube-api-access-645vn\") on node \"crc\" DevicePath \"\""
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.280912 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.280922 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48868fea-dd4b-449f-b079-40e16659c62a-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.575777 4900 generic.go:334] "Generic (PLEG): container finished" podID="48868fea-dd4b-449f-b079-40e16659c62a" containerID="75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f" exitCode=0
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.575889 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcff5" event={"ID":"48868fea-dd4b-449f-b079-40e16659c62a","Type":"ContainerDied","Data":"75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f"}
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.576095 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcff5" event={"ID":"48868fea-dd4b-449f-b079-40e16659c62a","Type":"ContainerDied","Data":"52037b0a655fe5fd52a24ee81147615abd08ecd2f192543c8fc6aae18e4183e6"}
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.576114 4900 scope.go:117] "RemoveContainer" containerID="75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f"
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.575932 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcff5"
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.611472 4900 scope.go:117] "RemoveContainer" containerID="23cd374a6f7d3d9dd2ca6d590a55a7c2f36cc6a0069c5ae5dcd9bed68065b55a"
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.613863 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcff5"]
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.623522 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcff5"]
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.646193 4900 scope.go:117] "RemoveContainer" containerID="2e01ca6c2841f8727c3f9296636a94f6cfcd094b2d1a1fdea3af0bcc61138a78"
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.696763 4900 scope.go:117] "RemoveContainer" containerID="75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f"
Dec 02 15:21:55 crc kubenswrapper[4900]: E1202 15:21:55.697322 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f\": container with ID starting with 75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f not found: ID does not exist" containerID="75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f"
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.697363 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f"} err="failed to get container status \"75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f\": rpc error: code = NotFound desc = could not find container \"75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f\": container with ID starting with 75b79ae7bc9e0c8205fdea79a6f9eb32d4c6768739547c7542110b03d987ca9f not found: ID does not exist"
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.697391 4900 scope.go:117] "RemoveContainer" containerID="23cd374a6f7d3d9dd2ca6d590a55a7c2f36cc6a0069c5ae5dcd9bed68065b55a"
Dec 02 15:21:55 crc kubenswrapper[4900]: E1202 15:21:55.697849 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23cd374a6f7d3d9dd2ca6d590a55a7c2f36cc6a0069c5ae5dcd9bed68065b55a\": container with ID starting with 23cd374a6f7d3d9dd2ca6d590a55a7c2f36cc6a0069c5ae5dcd9bed68065b55a not found: ID does not exist" containerID="23cd374a6f7d3d9dd2ca6d590a55a7c2f36cc6a0069c5ae5dcd9bed68065b55a"
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.697879 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23cd374a6f7d3d9dd2ca6d590a55a7c2f36cc6a0069c5ae5dcd9bed68065b55a"} err="failed to get container status \"23cd374a6f7d3d9dd2ca6d590a55a7c2f36cc6a0069c5ae5dcd9bed68065b55a\": rpc error: code = NotFound desc = could not find container \"23cd374a6f7d3d9dd2ca6d590a55a7c2f36cc6a0069c5ae5dcd9bed68065b55a\": container with ID starting with 23cd374a6f7d3d9dd2ca6d590a55a7c2f36cc6a0069c5ae5dcd9bed68065b55a not found: ID does not exist"
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.697895 4900 scope.go:117] "RemoveContainer" containerID="2e01ca6c2841f8727c3f9296636a94f6cfcd094b2d1a1fdea3af0bcc61138a78"
Dec 02 15:21:55 crc kubenswrapper[4900]: E1202 15:21:55.698246 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e01ca6c2841f8727c3f9296636a94f6cfcd094b2d1a1fdea3af0bcc61138a78\": container with ID starting with 2e01ca6c2841f8727c3f9296636a94f6cfcd094b2d1a1fdea3af0bcc61138a78 not found: ID does not exist" containerID="2e01ca6c2841f8727c3f9296636a94f6cfcd094b2d1a1fdea3af0bcc61138a78"
Dec 02 15:21:55 crc kubenswrapper[4900]: I1202 15:21:55.698265 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e01ca6c2841f8727c3f9296636a94f6cfcd094b2d1a1fdea3af0bcc61138a78"} err="failed to get container status \"2e01ca6c2841f8727c3f9296636a94f6cfcd094b2d1a1fdea3af0bcc61138a78\": rpc error: code = NotFound desc = could not find container \"2e01ca6c2841f8727c3f9296636a94f6cfcd094b2d1a1fdea3af0bcc61138a78\": container with ID starting with 2e01ca6c2841f8727c3f9296636a94f6cfcd094b2d1a1fdea3af0bcc61138a78 not found: ID does not exist"
Dec 02 15:21:56 crc kubenswrapper[4900]: I1202 15:21:56.910534 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:21:56 crc kubenswrapper[4900]: E1202 15:21:56.911431 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:21:56 crc kubenswrapper[4900]: I1202 15:21:56.935116 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48868fea-dd4b-449f-b079-40e16659c62a" path="/var/lib/kubelet/pods/48868fea-dd4b-449f-b079-40e16659c62a/volumes"
Dec 02 15:22:10 crc kubenswrapper[4900]: I1202 15:22:10.914841 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e"
Dec 02 15:22:10 crc kubenswrapper[4900]: E1202 15:22:10.915478 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.387287 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-94564bc7-b8btv"]
Dec 02 15:22:14 crc kubenswrapper[4900]: E1202 15:22:14.388600 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48868fea-dd4b-449f-b079-40e16659c62a" containerName="registry-server"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.388626 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="48868fea-dd4b-449f-b079-40e16659c62a" containerName="registry-server"
Dec 02 15:22:14 crc kubenswrapper[4900]: E1202 15:22:14.388677 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48868fea-dd4b-449f-b079-40e16659c62a" containerName="extract-content"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.388687 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="48868fea-dd4b-449f-b079-40e16659c62a" containerName="extract-content"
Dec 02 15:22:14 crc kubenswrapper[4900]: E1202 15:22:14.388705 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48868fea-dd4b-449f-b079-40e16659c62a" containerName="extract-utilities"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.388717 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="48868fea-dd4b-449f-b079-40e16659c62a" containerName="extract-utilities"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.389039 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="48868fea-dd4b-449f-b079-40e16659c62a" containerName="registry-server"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.390719 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.399573 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.400014 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.400221 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.400527 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-twtn2"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.407841 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-94564bc7-b8btv"]
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.482981 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.483264 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6199fc67-306a-4267-9303-4673b0145e06" containerName="glance-log" containerID="cri-o://b7021b75705609d151ef2585d725f07a66a29c49f4974eb71b48073ed5a2945c" gracePeriod=30
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.483421 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6199fc67-306a-4267-9303-4673b0145e06" containerName="glance-httpd" containerID="cri-o://22a90fc032e37a735a851e66367f1ce0f4604bcd4540471ce99135f8d8c59a59" gracePeriod=30
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.506085 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-85b6d7d85f-k4b95"]
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.509195 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.516790 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.517073 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" containerName="glance-log" containerID="cri-o://083e8c28c4970f3bdbd84e5e14659b349eb7c0a6e5f0316c1ff280bfeacf9e8f" gracePeriod=30
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.517227 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" containerName="glance-httpd" containerID="cri-o://182a284ab3b298b51861a8fa2aca2b7033022e8be81e6bf8097586e9df332175" gracePeriod=30
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.547962 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85b6d7d85f-k4b95"]
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.568436 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-scripts\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.568502 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e662316f-b3c6-471c-a87c-f5cc7a402917-horizon-secret-key\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.568531 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-config-data\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.568562 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qk5s\" (UniqueName: \"kubernetes.io/projected/e662316f-b3c6-471c-a87c-f5cc7a402917-kube-api-access-8qk5s\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.568698 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e662316f-b3c6-471c-a87c-f5cc7a402917-logs\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.670511 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e662316f-b3c6-471c-a87c-f5cc7a402917-logs\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.670635 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64ae1240-801e-4932-a917-4f4b249a6283-logs\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.670731 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-scripts\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.670754 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48vz\" (UniqueName: \"kubernetes.io/projected/64ae1240-801e-4932-a917-4f4b249a6283-kube-api-access-f48vz\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.670811 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64ae1240-801e-4932-a917-4f4b249a6283-horizon-secret-key\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.670836 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e662316f-b3c6-471c-a87c-f5cc7a402917-horizon-secret-key\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.670874 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-config-data\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.670902 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-config-data\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.670936 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qk5s\" (UniqueName: \"kubernetes.io/projected/e662316f-b3c6-471c-a87c-f5cc7a402917-kube-api-access-8qk5s\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.670971 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-scripts\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.670980 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e662316f-b3c6-471c-a87c-f5cc7a402917-logs\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.671353 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-scripts\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.672113 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-config-data\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.676146 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e662316f-b3c6-471c-a87c-f5cc7a402917-horizon-secret-key\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.685788 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qk5s\" (UniqueName: \"kubernetes.io/projected/e662316f-b3c6-471c-a87c-f5cc7a402917-kube-api-access-8qk5s\") pod \"horizon-94564bc7-b8btv\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.718335 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-94564bc7-b8btv"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.774491 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-config-data\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.774564 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-scripts\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.774685 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64ae1240-801e-4932-a917-4f4b249a6283-logs\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.774720 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f48vz\" (UniqueName: \"kubernetes.io/projected/64ae1240-801e-4932-a917-4f4b249a6283-kube-api-access-f48vz\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.774744 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64ae1240-801e-4932-a917-4f4b249a6283-horizon-secret-key\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.775364 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64ae1240-801e-4932-a917-4f4b249a6283-logs\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.775625 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-scripts\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.776418 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-config-data\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.781613 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64ae1240-801e-4932-a917-4f4b249a6283-horizon-secret-key\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.793506 4900 generic.go:334] "Generic (PLEG): container finished" podID="d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" containerID="083e8c28c4970f3bdbd84e5e14659b349eb7c0a6e5f0316c1ff280bfeacf9e8f" exitCode=143
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.793575 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f","Type":"ContainerDied","Data":"083e8c28c4970f3bdbd84e5e14659b349eb7c0a6e5f0316c1ff280bfeacf9e8f"}
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.794239 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48vz\" (UniqueName: \"kubernetes.io/projected/64ae1240-801e-4932-a917-4f4b249a6283-kube-api-access-f48vz\") pod \"horizon-85b6d7d85f-k4b95\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.795098 4900 generic.go:334] "Generic (PLEG): container finished" podID="6199fc67-306a-4267-9303-4673b0145e06" containerID="b7021b75705609d151ef2585d725f07a66a29c49f4974eb71b48073ed5a2945c" exitCode=143
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.795122 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6199fc67-306a-4267-9303-4673b0145e06","Type":"ContainerDied","Data":"b7021b75705609d151ef2585d725f07a66a29c49f4974eb71b48073ed5a2945c"}
Dec 02 15:22:14 crc kubenswrapper[4900]: I1202 15:22:14.835504 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85b6d7d85f-k4b95"
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.155923 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-94564bc7-b8btv"]
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.178941 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85b6d7d85f-k4b95"]
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.202761 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d5c9bf6cf-lz9kl"]
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.204483 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d5c9bf6cf-lz9kl"
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.217452 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d5c9bf6cf-lz9kl"]
Dec 02 15:22:15 crc kubenswrapper[4900]: W1202 15:22:15.303553 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64ae1240_801e_4932_a917_4f4b249a6283.slice/crio-78ec21bc3bfb27c75e6083da971f6652f6e91e0244fb90e18827b928f2c7f05e WatchSource:0}: Error finding container 78ec21bc3bfb27c75e6083da971f6652f6e91e0244fb90e18827b928f2c7f05e: Status 404 returned error can't find the container with id 78ec21bc3bfb27c75e6083da971f6652f6e91e0244fb90e18827b928f2c7f05e
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.307113 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85b6d7d85f-k4b95"]
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.387160 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-scripts\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl"
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.387548 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-config-data\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl"
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.387701 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d86072e1-840c-4704-be8a-0338ee314daa-horizon-secret-key\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl"
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.387754 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg2qx\" (UniqueName: \"kubernetes.io/projected/d86072e1-840c-4704-be8a-0338ee314daa-kube-api-access-zg2qx\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl"
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.387906 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d86072e1-840c-4704-be8a-0338ee314daa-logs\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl"
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.489850 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-scripts\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl"
Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.489953 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-config-data\") pod
\"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.490003 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d86072e1-840c-4704-be8a-0338ee314daa-horizon-secret-key\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.490036 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg2qx\" (UniqueName: \"kubernetes.io/projected/d86072e1-840c-4704-be8a-0338ee314daa-kube-api-access-zg2qx\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.490070 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d86072e1-840c-4704-be8a-0338ee314daa-logs\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.490468 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d86072e1-840c-4704-be8a-0338ee314daa-logs\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.490959 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-scripts\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.491788 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-config-data\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.497458 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d86072e1-840c-4704-be8a-0338ee314daa-horizon-secret-key\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.511275 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg2qx\" (UniqueName: \"kubernetes.io/projected/d86072e1-840c-4704-be8a-0338ee314daa-kube-api-access-zg2qx\") pod \"horizon-6d5c9bf6cf-lz9kl\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.531010 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.808603 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94564bc7-b8btv" event={"ID":"e662316f-b3c6-471c-a87c-f5cc7a402917","Type":"ContainerStarted","Data":"63e1f775ff13205283df15ad1442e00d81531a150f6c0bae2fb3487f3a977631"} Dec 02 15:22:15 crc kubenswrapper[4900]: I1202 15:22:15.812556 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85b6d7d85f-k4b95" event={"ID":"64ae1240-801e-4932-a917-4f4b249a6283","Type":"ContainerStarted","Data":"78ec21bc3bfb27c75e6083da971f6652f6e91e0244fb90e18827b928f2c7f05e"} Dec 02 15:22:16 crc kubenswrapper[4900]: I1202 15:22:16.158225 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d5c9bf6cf-lz9kl"] Dec 02 15:22:16 crc kubenswrapper[4900]: I1202 15:22:16.824512 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5c9bf6cf-lz9kl" event={"ID":"d86072e1-840c-4704-be8a-0338ee314daa","Type":"ContainerStarted","Data":"e58dc4e174eba824a6370f452323eff20efd89490f42daba589c51f185ed56df"} Dec 02 15:22:17 crc kubenswrapper[4900]: I1202 15:22:17.835759 4900 generic.go:334] "Generic (PLEG): container finished" podID="d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" containerID="182a284ab3b298b51861a8fa2aca2b7033022e8be81e6bf8097586e9df332175" exitCode=0 Dec 02 15:22:17 crc kubenswrapper[4900]: I1202 15:22:17.835851 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f","Type":"ContainerDied","Data":"182a284ab3b298b51861a8fa2aca2b7033022e8be81e6bf8097586e9df332175"} Dec 02 15:22:17 crc kubenswrapper[4900]: I1202 15:22:17.839077 4900 generic.go:334] "Generic (PLEG): container finished" podID="6199fc67-306a-4267-9303-4673b0145e06" containerID="22a90fc032e37a735a851e66367f1ce0f4604bcd4540471ce99135f8d8c59a59" exitCode=0 Dec 02 15:22:17 crc kubenswrapper[4900]: I1202 15:22:17.839109 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6199fc67-306a-4267-9303-4673b0145e06","Type":"ContainerDied","Data":"22a90fc032e37a735a851e66367f1ce0f4604bcd4540471ce99135f8d8c59a59"} Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.710030 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.762207 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.867293 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-logs\") pod \"6199fc67-306a-4267-9303-4673b0145e06\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.867388 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-config-data\") pod \"6199fc67-306a-4267-9303-4673b0145e06\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.867433 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-ceph\") pod \"6199fc67-306a-4267-9303-4673b0145e06\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.867484 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-892xs\" (UniqueName: \"kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-kube-api-access-892xs\") pod \"6199fc67-306a-4267-9303-4673b0145e06\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.867516 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-httpd-run\") pod \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.867543 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-scripts\") pod \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.867562 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-combined-ca-bundle\") pod \"6199fc67-306a-4267-9303-4673b0145e06\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.867607 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-ceph\") pod \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.867627 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-config-data\") pod \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.867661 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-logs\") pod \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 
15:22:22.867688 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-httpd-run\") pod \"6199fc67-306a-4267-9303-4673b0145e06\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.868291 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-scripts\") pod \"6199fc67-306a-4267-9303-4673b0145e06\" (UID: \"6199fc67-306a-4267-9303-4673b0145e06\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.868348 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-combined-ca-bundle\") pod \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.868380 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mpv7\" (UniqueName: \"kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-kube-api-access-5mpv7\") pod \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\" (UID: \"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f\") " Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.869048 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6199fc67-306a-4267-9303-4673b0145e06" (UID: "6199fc67-306a-4267-9303-4673b0145e06"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.869311 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-logs" (OuterVolumeSpecName: "logs") pod "6199fc67-306a-4267-9303-4673b0145e06" (UID: "6199fc67-306a-4267-9303-4673b0145e06"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.869559 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" (UID: "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.870469 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-logs" (OuterVolumeSpecName: "logs") pod "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" (UID: "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.872066 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-kube-api-access-892xs" (OuterVolumeSpecName: "kube-api-access-892xs") pod "6199fc67-306a-4267-9303-4673b0145e06" (UID: "6199fc67-306a-4267-9303-4673b0145e06"). InnerVolumeSpecName "kube-api-access-892xs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.872099 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-ceph" (OuterVolumeSpecName: "ceph") pod "6199fc67-306a-4267-9303-4673b0145e06" (UID: "6199fc67-306a-4267-9303-4673b0145e06"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.877403 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-scripts" (OuterVolumeSpecName: "scripts") pod "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" (UID: "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.880220 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-ceph" (OuterVolumeSpecName: "ceph") pod "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" (UID: "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.888171 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-scripts" (OuterVolumeSpecName: "scripts") pod "6199fc67-306a-4267-9303-4673b0145e06" (UID: "6199fc67-306a-4267-9303-4673b0145e06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.888869 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-kube-api-access-5mpv7" (OuterVolumeSpecName: "kube-api-access-5mpv7") pod "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" (UID: "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f"). InnerVolumeSpecName "kube-api-access-5mpv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.912908 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:22:22 crc kubenswrapper[4900]: E1202 15:22:22.913327 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.934170 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6199fc67-306a-4267-9303-4673b0145e06" (UID: "6199fc67-306a-4267-9303-4673b0145e06"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.934568 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-85b6d7d85f-k4b95" podUID="64ae1240-801e-4932-a917-4f4b249a6283" containerName="horizon-log" containerID="cri-o://cbbf28c817ccb9b01b8d648deb0ea0598765fad68fb6cda49cd8615ac69d58fb" gracePeriod=30 Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.935240 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-85b6d7d85f-k4b95" podUID="64ae1240-801e-4932-a917-4f4b249a6283" containerName="horizon" containerID="cri-o://0e94398762c723157a9ded0ddf3ec835a5a09cbceb56923a46e3aebb1ca1a260" gracePeriod=30 Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.940103 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85b6d7d85f-k4b95" event={"ID":"64ae1240-801e-4932-a917-4f4b249a6283","Type":"ContainerStarted","Data":"0e94398762c723157a9ded0ddf3ec835a5a09cbceb56923a46e3aebb1ca1a260"} Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.940280 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85b6d7d85f-k4b95" event={"ID":"64ae1240-801e-4932-a917-4f4b249a6283","Type":"ContainerStarted","Data":"cbbf28c817ccb9b01b8d648deb0ea0598765fad68fb6cda49cd8615ac69d58fb"} Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.944726 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94564bc7-b8btv" event={"ID":"e662316f-b3c6-471c-a87c-f5cc7a402917","Type":"ContainerStarted","Data":"64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362"} Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.944963 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94564bc7-b8btv" event={"ID":"e662316f-b3c6-471c-a87c-f5cc7a402917","Type":"ContainerStarted","Data":"aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20"} Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.948725 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f","Type":"ContainerDied","Data":"e7268a0b9cf1af478276a5a2b8d1194a357cf06d9b29c741bbdaac4d6ee02242"} Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.948851 4900 scope.go:117] "RemoveContainer" containerID="182a284ab3b298b51861a8fa2aca2b7033022e8be81e6bf8097586e9df332175" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.948914 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.954397 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.954416 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6199fc67-306a-4267-9303-4673b0145e06","Type":"ContainerDied","Data":"cb39de123371bf578dab6e787cb88de8c5ee30d8ffd5f46b3f3a15d7ca85a356"} Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.955482 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-config-data" (OuterVolumeSpecName: "config-data") pod "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" (UID: "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.956595 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" (UID: "d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.957004 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-85b6d7d85f-k4b95" podStartSLOduration=1.9345238839999999 podStartE2EDuration="8.956989482s" podCreationTimestamp="2025-12-02 15:22:14 +0000 UTC" firstStartedPulling="2025-12-02 15:22:15.305215213 +0000 UTC m=+5980.721029064" lastFinishedPulling="2025-12-02 15:22:22.327680811 +0000 UTC m=+5987.743494662" observedRunningTime="2025-12-02 15:22:22.950604733 +0000 UTC m=+5988.366418594" watchObservedRunningTime="2025-12-02 15:22:22.956989482 +0000 UTC m=+5988.372803323" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.960511 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5c9bf6cf-lz9kl" event={"ID":"d86072e1-840c-4704-be8a-0338ee314daa","Type":"ContainerStarted","Data":"de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06"} Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.960551 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5c9bf6cf-lz9kl" event={"ID":"d86072e1-840c-4704-be8a-0338ee314daa","Type":"ContainerStarted","Data":"760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661"} Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971028 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971056 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mpv7\" (UniqueName: \"kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-kube-api-access-5mpv7\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971071 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971081 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971089 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-892xs\" (UniqueName: \"kubernetes.io/projected/6199fc67-306a-4267-9303-4673b0145e06-kube-api-access-892xs\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971098 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971105 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971113 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971120 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971128 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971135 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971143 4900 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6199fc67-306a-4267-9303-4673b0145e06-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.971151 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.981799 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-config-data" (OuterVolumeSpecName: "config-data") pod "6199fc67-306a-4267-9303-4673b0145e06" (UID: "6199fc67-306a-4267-9303-4673b0145e06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:22:22 crc kubenswrapper[4900]: I1202 15:22:22.990620 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-94564bc7-b8btv" podStartSLOduration=1.798213358 podStartE2EDuration="8.990599205s" podCreationTimestamp="2025-12-02 15:22:14 +0000 UTC" firstStartedPulling="2025-12-02 15:22:15.160019088 +0000 UTC m=+5980.575832939" lastFinishedPulling="2025-12-02 15:22:22.352404905 +0000 UTC m=+5987.768218786" observedRunningTime="2025-12-02 15:22:22.973836145 +0000 UTC m=+5988.389649996" watchObservedRunningTime="2025-12-02 15:22:22.990599205 +0000 UTC m=+5988.406413046" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.001195 4900 scope.go:117] "RemoveContainer" containerID="083e8c28c4970f3bdbd84e5e14659b349eb7c0a6e5f0316c1ff280bfeacf9e8f" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.012992 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6d5c9bf6cf-lz9kl" podStartSLOduration=1.83980181 podStartE2EDuration="8.012970243s" podCreationTimestamp="2025-12-02 15:22:15 +0000 UTC" firstStartedPulling="2025-12-02 15:22:16.162112281 +0000 UTC m=+5981.577926172" lastFinishedPulling="2025-12-02 15:22:22.335280724 +0000 UTC m=+5987.751094605" observedRunningTime="2025-12-02 15:22:22.992731605 +0000 UTC m=+5988.408545466" watchObservedRunningTime="2025-12-02 15:22:23.012970243 +0000 UTC m=+5988.428784104" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.021851 4900 scope.go:117] "RemoveContainer" containerID="22a90fc032e37a735a851e66367f1ce0f4604bcd4540471ce99135f8d8c59a59" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.042512 4900 scope.go:117] "RemoveContainer" containerID="b7021b75705609d151ef2585d725f07a66a29c49f4974eb71b48073ed5a2945c" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.073472 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199fc67-306a-4267-9303-4673b0145e06-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.284743 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.306836 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.322207 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.342798 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.372494 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:22:23 crc kubenswrapper[4900]: E1202 15:22:23.373274 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6199fc67-306a-4267-9303-4673b0145e06" containerName="glance-log" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.373297 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6199fc67-306a-4267-9303-4673b0145e06" containerName="glance-log" Dec 02 15:22:23 crc kubenswrapper[4900]: E1202 15:22:23.373333 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" containerName="glance-httpd" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 
15:22:23.373340 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" containerName="glance-httpd" Dec 02 15:22:23 crc kubenswrapper[4900]: E1202 15:22:23.373353 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" containerName="glance-log" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.373360 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" containerName="glance-log" Dec 02 15:22:23 crc kubenswrapper[4900]: E1202 15:22:23.373390 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6199fc67-306a-4267-9303-4673b0145e06" containerName="glance-httpd" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.376738 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6199fc67-306a-4267-9303-4673b0145e06" containerName="glance-httpd" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.377239 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="6199fc67-306a-4267-9303-4673b0145e06" containerName="glance-httpd" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.377267 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" containerName="glance-log" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.377290 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="6199fc67-306a-4267-9303-4673b0145e06" containerName="glance-log" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.377316 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" containerName="glance-httpd" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.379052 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.381400 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.381593 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.382152 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dn9lx" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.396910 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.416717 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.421287 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.427861 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.437389 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.482287 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c587812a-dffe-46fc-8407-a102214416f7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.482366 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c587812a-dffe-46fc-8407-a102214416f7-logs\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.482387 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c587812a-dffe-46fc-8407-a102214416f7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.482472 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hglfv\" (UniqueName: \"kubernetes.io/projected/c587812a-dffe-46fc-8407-a102214416f7-kube-api-access-hglfv\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.482493 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c587812a-dffe-46fc-8407-a102214416f7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.482509 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c587812a-dffe-46fc-8407-a102214416f7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.482545 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c587812a-dffe-46fc-8407-a102214416f7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.584794 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067e9541-7243-4e60-b233-1d180118c325-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.584918 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/067e9541-7243-4e60-b233-1d180118c325-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.585111 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hglfv\" (UniqueName: \"kubernetes.io/projected/c587812a-dffe-46fc-8407-a102214416f7-kube-api-access-hglfv\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.585183 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c587812a-dffe-46fc-8407-a102214416f7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.585246 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c587812a-dffe-46fc-8407-a102214416f7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.586446 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/067e9541-7243-4e60-b233-1d180118c325-logs\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.586520 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/067e9541-7243-4e60-b233-1d180118c325-scripts\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.586790 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c587812a-dffe-46fc-8407-a102214416f7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.587254 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zfd\" (UniqueName: \"kubernetes.io/projected/067e9541-7243-4e60-b233-1d180118c325-kube-api-access-85zfd\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.587326 4900 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c587812a-dffe-46fc-8407-a102214416f7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.587489 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c587812a-dffe-46fc-8407-a102214416f7-logs\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.587550 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/067e9541-7243-4e60-b233-1d180118c325-ceph\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.587605 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c587812a-dffe-46fc-8407-a102214416f7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.587818 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/067e9541-7243-4e60-b233-1d180118c325-config-data\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.587740 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c587812a-dffe-46fc-8407-a102214416f7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.588385 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c587812a-dffe-46fc-8407-a102214416f7-logs\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.592095 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c587812a-dffe-46fc-8407-a102214416f7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.592831 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c587812a-dffe-46fc-8407-a102214416f7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.595850 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c587812a-dffe-46fc-8407-a102214416f7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.604168 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hglfv\" (UniqueName: \"kubernetes.io/projected/c587812a-dffe-46fc-8407-a102214416f7-kube-api-access-hglfv\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.604914 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c587812a-dffe-46fc-8407-a102214416f7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c587812a-dffe-46fc-8407-a102214416f7\") " pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.690011 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/067e9541-7243-4e60-b233-1d180118c325-logs\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.690061 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/067e9541-7243-4e60-b233-1d180118c325-scripts\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.690126 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85zfd\" (UniqueName: \"kubernetes.io/projected/067e9541-7243-4e60-b233-1d180118c325-kube-api-access-85zfd\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.690196 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/067e9541-7243-4e60-b233-1d180118c325-ceph\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.690241 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/067e9541-7243-4e60-b233-1d180118c325-config-data\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.690276 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067e9541-7243-4e60-b233-1d180118c325-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.690306 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/067e9541-7243-4e60-b233-1d180118c325-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 
15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.690545 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/067e9541-7243-4e60-b233-1d180118c325-logs\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.690777 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/067e9541-7243-4e60-b233-1d180118c325-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.695574 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/067e9541-7243-4e60-b233-1d180118c325-ceph\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.696189 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/067e9541-7243-4e60-b233-1d180118c325-scripts\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.696670 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067e9541-7243-4e60-b233-1d180118c325-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.699414 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/067e9541-7243-4e60-b233-1d180118c325-config-data\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.714439 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85zfd\" (UniqueName: \"kubernetes.io/projected/067e9541-7243-4e60-b233-1d180118c325-kube-api-access-85zfd\") pod \"glance-default-external-api-0\" (UID: \"067e9541-7243-4e60-b233-1d180118c325\") " pod="openstack/glance-default-external-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.723089 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 15:22:23 crc kubenswrapper[4900]: I1202 15:22:23.743320 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 15:22:24 crc kubenswrapper[4900]: I1202 15:22:24.342854 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 15:22:24 crc kubenswrapper[4900]: W1202 15:22:24.343144 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod067e9541_7243_4e60_b233_1d180118c325.slice/crio-9d2b8008985f82a9d7e6869f9587bfd9de3e7c61973f8175b3da285a5ad71e71 WatchSource:0}: Error finding container 9d2b8008985f82a9d7e6869f9587bfd9de3e7c61973f8175b3da285a5ad71e71: Status 404 returned error can't find the container with id 9d2b8008985f82a9d7e6869f9587bfd9de3e7c61973f8175b3da285a5ad71e71 Dec 02 15:22:24 crc kubenswrapper[4900]: I1202 15:22:24.476938 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 15:22:24 crc kubenswrapper[4900]: W1202 15:22:24.488745 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc587812a_dffe_46fc_8407_a102214416f7.slice/crio-1c5f456a9f74918f44bb8131d83385eeb12fe3308c3e66236e07e8524ad8883e WatchSource:0}: Error finding container 1c5f456a9f74918f44bb8131d83385eeb12fe3308c3e66236e07e8524ad8883e: Status 404 returned error can't find the container with id 1c5f456a9f74918f44bb8131d83385eeb12fe3308c3e66236e07e8524ad8883e Dec 02 15:22:24 crc kubenswrapper[4900]: I1202 15:22:24.719227 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-94564bc7-b8btv" Dec 02 15:22:24 crc kubenswrapper[4900]: I1202 15:22:24.719291 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-94564bc7-b8btv" Dec 02 15:22:24 crc kubenswrapper[4900]: I1202 15:22:24.838195 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-85b6d7d85f-k4b95" Dec 02 15:22:24 crc kubenswrapper[4900]: I1202 15:22:24.925881 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6199fc67-306a-4267-9303-4673b0145e06" path="/var/lib/kubelet/pods/6199fc67-306a-4267-9303-4673b0145e06/volumes" Dec 02 15:22:24 crc kubenswrapper[4900]: I1202 15:22:24.927256 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f" path="/var/lib/kubelet/pods/d6ff2f5d-0868-4d36-a7d0-ba2b2e73c12f/volumes" Dec 02 15:22:25 crc kubenswrapper[4900]: I1202 15:22:25.008446 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c587812a-dffe-46fc-8407-a102214416f7","Type":"ContainerStarted","Data":"1c5f456a9f74918f44bb8131d83385eeb12fe3308c3e66236e07e8524ad8883e"} Dec 02 15:22:25 crc kubenswrapper[4900]: I1202 15:22:25.013745 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"067e9541-7243-4e60-b233-1d180118c325","Type":"ContainerStarted","Data":"9d2b8008985f82a9d7e6869f9587bfd9de3e7c61973f8175b3da285a5ad71e71"} Dec 02 15:22:25 crc kubenswrapper[4900]: I1202 15:22:25.532749 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:25 crc kubenswrapper[4900]: I1202 15:22:25.533047 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:26 crc kubenswrapper[4900]: I1202 15:22:26.025251 
Dec 02 15:22:26 crc kubenswrapper[4900]: I1202 15:22:26.025519 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"067e9541-7243-4e60-b233-1d180118c325","Type":"ContainerStarted","Data":"e8275fcd1cc107baeeb4d65784fe4bf4bbd8a80e507ed91eed044f248cddbbb8"}
Dec 02 15:22:26 crc kubenswrapper[4900]: I1202 15:22:26.028853 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c587812a-dffe-46fc-8407-a102214416f7","Type":"ContainerStarted","Data":"79255a23cb5723f0b59d65e8fc54d6b1d819f36d9b8b59d05efdf09b7586d663"}
Dec 02 15:22:26 crc kubenswrapper[4900]: I1202 15:22:26.028875 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c587812a-dffe-46fc-8407-a102214416f7","Type":"ContainerStarted","Data":"f865c3e679d86c2f2c1c3338ac0b2737a8260fd6d4f797c1219eef1b2a87ad8e"}
Dec 02 15:22:26 crc kubenswrapper[4900]: I1202 15:22:26.050221 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.050194119 podStartE2EDuration="3.050194119s" podCreationTimestamp="2025-12-02 15:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:22:26.046386552 +0000 UTC m=+5991.462200403" watchObservedRunningTime="2025-12-02 15:22:26.050194119 +0000 UTC m=+5991.466008000"
Dec 02 15:22:26 crc kubenswrapper[4900]: I1202 15:22:26.084024 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.084008768 podStartE2EDuration="3.084008768s" podCreationTimestamp="2025-12-02 15:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:22:26.08300987 +0000 UTC m=+5991.498823721" watchObservedRunningTime="2025-12-02 15:22:26.084008768 +0000 UTC m=+5991.499822619"
Dec 02 15:22:33 crc kubenswrapper[4900]: I1202 15:22:33.724547 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 02 15:22:33 crc kubenswrapper[4900]: I1202 15:22:33.725270 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 02 15:22:33 crc kubenswrapper[4900]: I1202 15:22:33.743935 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Dec 02 15:22:33 crc kubenswrapper[4900]: I1202 15:22:33.743977 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Dec 02 15:22:33 crc kubenswrapper[4900]: I1202 15:22:33.766300 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 02 15:22:33 crc kubenswrapper[4900]: I1202 15:22:33.766781 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 02 15:22:33 crc kubenswrapper[4900]: I1202 15:22:33.802812 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
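Note: for both glance pods podStartSLOduration equals podStartE2EDuration because firstStartedPulling/lastFinishedPulling are the zero time (no image pull happened, the images were cached). A sketch of the arithmetic these fields imply, assuming (not confirmed from kubelet source) that the SLO figure is time-to-running since creation minus any pull window:

    package main

    import (
    	"fmt"
    	"time"
    )

    // startSLO: observed running time minus creation, minus the image-pull
    // window when one exists. Field names mirror the log entries above.
    func startSLO(created, running, pullStart, pullEnd time.Time) time.Duration {
    	d := running.Sub(created)
    	if !pullStart.IsZero() && !pullEnd.IsZero() {
    		d -= pullEnd.Sub(pullStart)
    	}
    	return d
    }

    func main() {
    	// glance-default-external-api-0, from the entry above.
    	created := time.Date(2025, 12, 2, 15, 22, 23, 0, time.UTC)
    	running := time.Date(2025, 12, 2, 15, 22, 26, 50194119, time.UTC)
    	fmt.Println(startSLO(created, running, time.Time{}, time.Time{})) // 3.050194119s
    }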
probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 15:22:33 crc kubenswrapper[4900]: I1202 15:22:33.811459 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 15:22:34 crc kubenswrapper[4900]: I1202 15:22:34.128967 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 15:22:34 crc kubenswrapper[4900]: I1202 15:22:34.128997 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 15:22:34 crc kubenswrapper[4900]: I1202 15:22:34.129010 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 15:22:34 crc kubenswrapper[4900]: I1202 15:22:34.129115 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 15:22:34 crc kubenswrapper[4900]: I1202 15:22:34.720759 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-94564bc7-b8btv" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Dec 02 15:22:35 crc kubenswrapper[4900]: I1202 15:22:35.535359 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6d5c9bf6cf-lz9kl" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.110:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8080: connect: connection refused" Dec 02 15:22:36 crc kubenswrapper[4900]: I1202 15:22:36.113936 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 15:22:36 crc kubenswrapper[4900]: I1202 15:22:36.158489 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 15:22:36 crc kubenswrapper[4900]: I1202 15:22:36.163164 4900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 15:22:36 crc kubenswrapper[4900]: I1202 15:22:36.163196 4900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 15:22:36 crc kubenswrapper[4900]: I1202 15:22:36.333974 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 15:22:36 crc kubenswrapper[4900]: I1202 15:22:36.337509 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 15:22:36 crc kubenswrapper[4900]: I1202 15:22:36.910972 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:22:36 crc kubenswrapper[4900]: E1202 15:22:36.911479 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:22:46 crc kubenswrapper[4900]: I1202 15:22:46.383232 4900 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/horizon-94564bc7-b8btv" Dec 02 15:22:47 crc kubenswrapper[4900]: I1202 15:22:47.080710 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9402-account-create-update-rvfbn"] Dec 02 15:22:47 crc kubenswrapper[4900]: I1202 15:22:47.090463 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xdzd5"] Dec 02 15:22:47 crc kubenswrapper[4900]: I1202 15:22:47.102431 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9402-account-create-update-rvfbn"] Dec 02 15:22:47 crc kubenswrapper[4900]: I1202 15:22:47.113193 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xdzd5"] Dec 02 15:22:47 crc kubenswrapper[4900]: I1202 15:22:47.310974 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:48 crc kubenswrapper[4900]: I1202 15:22:48.056379 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-94564bc7-b8btv" Dec 02 15:22:48 crc kubenswrapper[4900]: I1202 15:22:48.938363 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b417b22a-6ddb-4537-a954-80ed6f20ab40" path="/var/lib/kubelet/pods/b417b22a-6ddb-4537-a954-80ed6f20ab40/volumes" Dec 02 15:22:48 crc kubenswrapper[4900]: I1202 15:22:48.939078 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4759518-d3b9-4007-be35-dc9b410f3a84" path="/var/lib/kubelet/pods/c4759518-d3b9-4007-be35-dc9b410f3a84/volumes" Dec 02 15:22:49 crc kubenswrapper[4900]: I1202 15:22:49.080190 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:22:49 crc kubenswrapper[4900]: I1202 15:22:49.149515 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-94564bc7-b8btv"] Dec 02 15:22:49 crc kubenswrapper[4900]: I1202 15:22:49.150309 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-94564bc7-b8btv" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon-log" containerID="cri-o://aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20" gracePeriod=30 Dec 02 15:22:49 crc kubenswrapper[4900]: I1202 15:22:49.150461 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-94564bc7-b8btv" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon" containerID="cri-o://64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362" gracePeriod=30 Dec 02 15:22:50 crc kubenswrapper[4900]: I1202 15:22:50.910481 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:22:51 crc kubenswrapper[4900]: I1202 15:22:51.341817 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"5b89bde9e30bda55f0fc8913241034f85509c78b8e0ea65f7e6e475647a12267"} Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.047570 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2jbhb"] Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.058832 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2jbhb"] Dec 02 15:22:53 crc kubenswrapper[4900]: E1202 15:22:53.263939 4900 
Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.372321 4900 generic.go:334] "Generic (PLEG): container finished" podID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerID="64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362" exitCode=0
Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.372377 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94564bc7-b8btv" event={"ID":"e662316f-b3c6-471c-a87c-f5cc7a402917","Type":"ContainerDied","Data":"64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362"}
Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.374838 4900 generic.go:334] "Generic (PLEG): container finished" podID="64ae1240-801e-4932-a917-4f4b249a6283" containerID="0e94398762c723157a9ded0ddf3ec835a5a09cbceb56923a46e3aebb1ca1a260" exitCode=137
Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.374863 4900 generic.go:334] "Generic (PLEG): container finished" podID="64ae1240-801e-4932-a917-4f4b249a6283" containerID="cbbf28c817ccb9b01b8d648deb0ea0598765fad68fb6cda49cd8615ac69d58fb" exitCode=137
Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.374884 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85b6d7d85f-k4b95" event={"ID":"64ae1240-801e-4932-a917-4f4b249a6283","Type":"ContainerDied","Data":"0e94398762c723157a9ded0ddf3ec835a5a09cbceb56923a46e3aebb1ca1a260"}
Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.374906 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85b6d7d85f-k4b95" event={"ID":"64ae1240-801e-4932-a917-4f4b249a6283","Type":"ContainerDied","Data":"cbbf28c817ccb9b01b8d648deb0ea0598765fad68fb6cda49cd8615ac69d58fb"}
Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.479421 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85b6d7d85f-k4b95"
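Note: the contrast in exit codes above is the grace period playing out: horizon on the deleted pod shut down cleanly (exitCode=0), while the two horizon-85b6d7d85f-k4b95 containers were killed (exitCode=137, i.e. 128 + SIGKILL(9)). The earlier cadvisor "RecentStats: unable to find data in memory cache" error is the stats provider querying cgroups of containers that had just died. A tiny decoder for the 128+N convention:

    package main

    import (
    	"fmt"
    	"syscall"
    )

    // describeExit decodes a container exit code: values above 128 follow
    // the shell convention 128 + signal number.
    func describeExit(code int) string {
    	if code > 128 {
    		sig := syscall.Signal(code - 128)
    		return fmt.Sprintf("terminated by signal %d (%v)", code-128, sig)
    	}
    	return fmt.Sprintf("exited with status %d", code)
    }

    func main() {
    	fmt.Println(describeExit(137)) // terminated by signal 9 (killed)
    	fmt.Println(describeExit(0))   // exited with status 0
    }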
Need to start a new one" pod="openstack/horizon-85b6d7d85f-k4b95" Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.612166 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-config-data\") pod \"64ae1240-801e-4932-a917-4f4b249a6283\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.612757 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64ae1240-801e-4932-a917-4f4b249a6283-logs\") pod \"64ae1240-801e-4932-a917-4f4b249a6283\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.612794 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-scripts\") pod \"64ae1240-801e-4932-a917-4f4b249a6283\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.612832 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f48vz\" (UniqueName: \"kubernetes.io/projected/64ae1240-801e-4932-a917-4f4b249a6283-kube-api-access-f48vz\") pod \"64ae1240-801e-4932-a917-4f4b249a6283\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.612905 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64ae1240-801e-4932-a917-4f4b249a6283-horizon-secret-key\") pod \"64ae1240-801e-4932-a917-4f4b249a6283\" (UID: \"64ae1240-801e-4932-a917-4f4b249a6283\") " Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.613336 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64ae1240-801e-4932-a917-4f4b249a6283-logs" (OuterVolumeSpecName: "logs") pod "64ae1240-801e-4932-a917-4f4b249a6283" (UID: "64ae1240-801e-4932-a917-4f4b249a6283"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.621831 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ae1240-801e-4932-a917-4f4b249a6283-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "64ae1240-801e-4932-a917-4f4b249a6283" (UID: "64ae1240-801e-4932-a917-4f4b249a6283"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.621920 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ae1240-801e-4932-a917-4f4b249a6283-kube-api-access-f48vz" (OuterVolumeSpecName: "kube-api-access-f48vz") pod "64ae1240-801e-4932-a917-4f4b249a6283" (UID: "64ae1240-801e-4932-a917-4f4b249a6283"). InnerVolumeSpecName "kube-api-access-f48vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.639571 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-scripts" (OuterVolumeSpecName: "scripts") pod "64ae1240-801e-4932-a917-4f4b249a6283" (UID: "64ae1240-801e-4932-a917-4f4b249a6283"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.642312 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-config-data" (OuterVolumeSpecName: "config-data") pod "64ae1240-801e-4932-a917-4f4b249a6283" (UID: "64ae1240-801e-4932-a917-4f4b249a6283"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.715198 4900 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64ae1240-801e-4932-a917-4f4b249a6283-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.715249 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.715265 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64ae1240-801e-4932-a917-4f4b249a6283-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.715282 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64ae1240-801e-4932-a917-4f4b249a6283-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:53 crc kubenswrapper[4900]: I1202 15:22:53.715298 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f48vz\" (UniqueName: \"kubernetes.io/projected/64ae1240-801e-4932-a917-4f4b249a6283-kube-api-access-f48vz\") on node \"crc\" DevicePath \"\"" Dec 02 15:22:54 crc kubenswrapper[4900]: I1202 15:22:54.389767 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85b6d7d85f-k4b95" event={"ID":"64ae1240-801e-4932-a917-4f4b249a6283","Type":"ContainerDied","Data":"78ec21bc3bfb27c75e6083da971f6652f6e91e0244fb90e18827b928f2c7f05e"} Dec 02 15:22:54 crc kubenswrapper[4900]: I1202 15:22:54.390099 4900 scope.go:117] "RemoveContainer" containerID="0e94398762c723157a9ded0ddf3ec835a5a09cbceb56923a46e3aebb1ca1a260" Dec 02 15:22:54 crc kubenswrapper[4900]: I1202 15:22:54.389893 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85b6d7d85f-k4b95" Dec 02 15:22:54 crc kubenswrapper[4900]: I1202 15:22:54.439832 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85b6d7d85f-k4b95"] Dec 02 15:22:54 crc kubenswrapper[4900]: I1202 15:22:54.447694 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-85b6d7d85f-k4b95"] Dec 02 15:22:54 crc kubenswrapper[4900]: I1202 15:22:54.598000 4900 scope.go:117] "RemoveContainer" containerID="cbbf28c817ccb9b01b8d648deb0ea0598765fad68fb6cda49cd8615ac69d58fb" Dec 02 15:22:54 crc kubenswrapper[4900]: I1202 15:22:54.719259 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-94564bc7-b8btv" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Dec 02 15:22:54 crc kubenswrapper[4900]: I1202 15:22:54.928753 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c641e8-f462-45b4-9626-297966f19298" path="/var/lib/kubelet/pods/60c641e8-f462-45b4-9626-297966f19298/volumes" Dec 02 15:22:54 crc kubenswrapper[4900]: I1202 15:22:54.929768 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ae1240-801e-4932-a917-4f4b249a6283" path="/var/lib/kubelet/pods/64ae1240-801e-4932-a917-4f4b249a6283/volumes" Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.797929 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b86d75b6f-gp8ml"] Dec 02 15:22:56 crc kubenswrapper[4900]: E1202 15:22:56.809118 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ae1240-801e-4932-a917-4f4b249a6283" containerName="horizon-log" Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.809170 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ae1240-801e-4932-a917-4f4b249a6283" containerName="horizon-log" Dec 02 15:22:56 crc kubenswrapper[4900]: E1202 15:22:56.809249 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ae1240-801e-4932-a917-4f4b249a6283" containerName="horizon" Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.809261 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ae1240-801e-4932-a917-4f4b249a6283" containerName="horizon" Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.810515 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ae1240-801e-4932-a917-4f4b249a6283" containerName="horizon-log" Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.810584 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ae1240-801e-4932-a917-4f4b249a6283" containerName="horizon" Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.814093 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.839145 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b86d75b6f-gp8ml"] Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.900089 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf781a9-0950-4d20-8ee9-9b4fa0305657-logs\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.900149 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrxbl\" (UniqueName: \"kubernetes.io/projected/1bf781a9-0950-4d20-8ee9-9b4fa0305657-kube-api-access-hrxbl\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.900175 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bf781a9-0950-4d20-8ee9-9b4fa0305657-config-data\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.900200 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bf781a9-0950-4d20-8ee9-9b4fa0305657-scripts\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:56 crc kubenswrapper[4900]: I1202 15:22:56.900316 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1bf781a9-0950-4d20-8ee9-9b4fa0305657-horizon-secret-key\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.001887 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1bf781a9-0950-4d20-8ee9-9b4fa0305657-horizon-secret-key\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.002022 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf781a9-0950-4d20-8ee9-9b4fa0305657-logs\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.002063 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrxbl\" (UniqueName: \"kubernetes.io/projected/1bf781a9-0950-4d20-8ee9-9b4fa0305657-kube-api-access-hrxbl\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.002094 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/1bf781a9-0950-4d20-8ee9-9b4fa0305657-config-data\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.002146 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bf781a9-0950-4d20-8ee9-9b4fa0305657-scripts\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.004409 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bf781a9-0950-4d20-8ee9-9b4fa0305657-config-data\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.004736 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf781a9-0950-4d20-8ee9-9b4fa0305657-logs\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.004827 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bf781a9-0950-4d20-8ee9-9b4fa0305657-scripts\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.015304 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1bf781a9-0950-4d20-8ee9-9b4fa0305657-horizon-secret-key\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.031209 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrxbl\" (UniqueName: \"kubernetes.io/projected/1bf781a9-0950-4d20-8ee9-9b4fa0305657-kube-api-access-hrxbl\") pod \"horizon-7b86d75b6f-gp8ml\" (UID: \"1bf781a9-0950-4d20-8ee9-9b4fa0305657\") " pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.156168 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:22:57 crc kubenswrapper[4900]: I1202 15:22:57.664517 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b86d75b6f-gp8ml"] Dec 02 15:22:57 crc kubenswrapper[4900]: W1202 15:22:57.666263 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bf781a9_0950_4d20_8ee9_9b4fa0305657.slice/crio-725b5d3f8c144ca22352e57307ec53a5e8cf538b12f3d1018bb123222efb5345 WatchSource:0}: Error finding container 725b5d3f8c144ca22352e57307ec53a5e8cf538b12f3d1018bb123222efb5345: Status 404 returned error can't find the container with id 725b5d3f8c144ca22352e57307ec53a5e8cf538b12f3d1018bb123222efb5345 Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.138868 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-84gn7"] Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.140692 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-84gn7" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.148629 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-84gn7"] Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.237375 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28103557-41db-491f-99c1-3f122972f6b9-operator-scripts\") pod \"heat-db-create-84gn7\" (UID: \"28103557-41db-491f-99c1-3f122972f6b9\") " pod="openstack/heat-db-create-84gn7" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.237457 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsl7f\" (UniqueName: \"kubernetes.io/projected/28103557-41db-491f-99c1-3f122972f6b9-kube-api-access-zsl7f\") pod \"heat-db-create-84gn7\" (UID: \"28103557-41db-491f-99c1-3f122972f6b9\") " pod="openstack/heat-db-create-84gn7" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.240075 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-c5e2-account-create-update-npvr6"] Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.241476 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c5e2-account-create-update-npvr6" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.243447 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.250413 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c5e2-account-create-update-npvr6"] Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.339610 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52132994-d9fc-4431-8497-a23f6c6dc7e5-operator-scripts\") pod \"heat-c5e2-account-create-update-npvr6\" (UID: \"52132994-d9fc-4431-8497-a23f6c6dc7e5\") " pod="openstack/heat-c5e2-account-create-update-npvr6" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.339770 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4wml\" (UniqueName: \"kubernetes.io/projected/52132994-d9fc-4431-8497-a23f6c6dc7e5-kube-api-access-x4wml\") pod \"heat-c5e2-account-create-update-npvr6\" (UID: \"52132994-d9fc-4431-8497-a23f6c6dc7e5\") " pod="openstack/heat-c5e2-account-create-update-npvr6" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.339858 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28103557-41db-491f-99c1-3f122972f6b9-operator-scripts\") pod \"heat-db-create-84gn7\" (UID: \"28103557-41db-491f-99c1-3f122972f6b9\") " pod="openstack/heat-db-create-84gn7" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.339915 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsl7f\" (UniqueName: \"kubernetes.io/projected/28103557-41db-491f-99c1-3f122972f6b9-kube-api-access-zsl7f\") pod \"heat-db-create-84gn7\" (UID: \"28103557-41db-491f-99c1-3f122972f6b9\") " pod="openstack/heat-db-create-84gn7" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.340624 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28103557-41db-491f-99c1-3f122972f6b9-operator-scripts\") pod \"heat-db-create-84gn7\" (UID: \"28103557-41db-491f-99c1-3f122972f6b9\") " pod="openstack/heat-db-create-84gn7" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.362487 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsl7f\" (UniqueName: \"kubernetes.io/projected/28103557-41db-491f-99c1-3f122972f6b9-kube-api-access-zsl7f\") pod \"heat-db-create-84gn7\" (UID: \"28103557-41db-491f-99c1-3f122972f6b9\") " pod="openstack/heat-db-create-84gn7" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.440853 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b86d75b6f-gp8ml" event={"ID":"1bf781a9-0950-4d20-8ee9-9b4fa0305657","Type":"ContainerStarted","Data":"0988cfcaa9b76910c2c70f615b355b75c89c68d44917519d13dfb268abbe0fb7"} Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.440894 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b86d75b6f-gp8ml" event={"ID":"1bf781a9-0950-4d20-8ee9-9b4fa0305657","Type":"ContainerStarted","Data":"6084de196faf708b2aba07de600bc5317430ac3bd18f1fc99266b8f6f5418d99"} Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.440904 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-7b86d75b6f-gp8ml" event={"ID":"1bf781a9-0950-4d20-8ee9-9b4fa0305657","Type":"ContainerStarted","Data":"725b5d3f8c144ca22352e57307ec53a5e8cf538b12f3d1018bb123222efb5345"} Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.442006 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52132994-d9fc-4431-8497-a23f6c6dc7e5-operator-scripts\") pod \"heat-c5e2-account-create-update-npvr6\" (UID: \"52132994-d9fc-4431-8497-a23f6c6dc7e5\") " pod="openstack/heat-c5e2-account-create-update-npvr6" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.442089 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4wml\" (UniqueName: \"kubernetes.io/projected/52132994-d9fc-4431-8497-a23f6c6dc7e5-kube-api-access-x4wml\") pod \"heat-c5e2-account-create-update-npvr6\" (UID: \"52132994-d9fc-4431-8497-a23f6c6dc7e5\") " pod="openstack/heat-c5e2-account-create-update-npvr6" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.442831 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52132994-d9fc-4431-8497-a23f6c6dc7e5-operator-scripts\") pod \"heat-c5e2-account-create-update-npvr6\" (UID: \"52132994-d9fc-4431-8497-a23f6c6dc7e5\") " pod="openstack/heat-c5e2-account-create-update-npvr6" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.458285 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4wml\" (UniqueName: \"kubernetes.io/projected/52132994-d9fc-4431-8497-a23f6c6dc7e5-kube-api-access-x4wml\") pod \"heat-c5e2-account-create-update-npvr6\" (UID: \"52132994-d9fc-4431-8497-a23f6c6dc7e5\") " pod="openstack/heat-c5e2-account-create-update-npvr6" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.458700 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b86d75b6f-gp8ml" podStartSLOduration=2.458682948 podStartE2EDuration="2.458682948s" podCreationTimestamp="2025-12-02 15:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:22:58.458020889 +0000 UTC m=+6023.873834740" watchObservedRunningTime="2025-12-02 15:22:58.458682948 +0000 UTC m=+6023.874496799" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.458854 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-84gn7" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.560855 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c5e2-account-create-update-npvr6" Dec 02 15:22:58 crc kubenswrapper[4900]: I1202 15:22:58.893515 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-84gn7"] Dec 02 15:22:58 crc kubenswrapper[4900]: W1202 15:22:58.894639 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28103557_41db_491f_99c1_3f122972f6b9.slice/crio-cc869bc3209196edfddd0449a941d73533001a90a8be0d128c992979e430cb31 WatchSource:0}: Error finding container cc869bc3209196edfddd0449a941d73533001a90a8be0d128c992979e430cb31: Status 404 returned error can't find the container with id cc869bc3209196edfddd0449a941d73533001a90a8be0d128c992979e430cb31 Dec 02 15:22:59 crc kubenswrapper[4900]: I1202 15:22:59.065378 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c5e2-account-create-update-npvr6"] Dec 02 15:22:59 crc kubenswrapper[4900]: W1202 15:22:59.068770 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52132994_d9fc_4431_8497_a23f6c6dc7e5.slice/crio-3da602b69ac87d4ca25f904b7908f9a22638661efc9eecc055d61eb58a893ce2 WatchSource:0}: Error finding container 3da602b69ac87d4ca25f904b7908f9a22638661efc9eecc055d61eb58a893ce2: Status 404 returned error can't find the container with id 3da602b69ac87d4ca25f904b7908f9a22638661efc9eecc055d61eb58a893ce2 Dec 02 15:22:59 crc kubenswrapper[4900]: I1202 15:22:59.453657 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c5e2-account-create-update-npvr6" event={"ID":"52132994-d9fc-4431-8497-a23f6c6dc7e5","Type":"ContainerStarted","Data":"2245a909a05736d67ddd2bdc28d65513fdc230abb893db3a2c9cb8c673a8d48c"} Dec 02 15:22:59 crc kubenswrapper[4900]: I1202 15:22:59.454029 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c5e2-account-create-update-npvr6" event={"ID":"52132994-d9fc-4431-8497-a23f6c6dc7e5","Type":"ContainerStarted","Data":"3da602b69ac87d4ca25f904b7908f9a22638661efc9eecc055d61eb58a893ce2"} Dec 02 15:22:59 crc kubenswrapper[4900]: I1202 15:22:59.462313 4900 generic.go:334] "Generic (PLEG): container finished" podID="28103557-41db-491f-99c1-3f122972f6b9" containerID="33293103028a8ccc5ba7daf0de7d4c5449b481fa15763d86507046fbaf2864a2" exitCode=0 Dec 02 15:22:59 crc kubenswrapper[4900]: I1202 15:22:59.462781 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-84gn7" event={"ID":"28103557-41db-491f-99c1-3f122972f6b9","Type":"ContainerDied","Data":"33293103028a8ccc5ba7daf0de7d4c5449b481fa15763d86507046fbaf2864a2"} Dec 02 15:22:59 crc kubenswrapper[4900]: I1202 15:22:59.462842 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-84gn7" event={"ID":"28103557-41db-491f-99c1-3f122972f6b9","Type":"ContainerStarted","Data":"cc869bc3209196edfddd0449a941d73533001a90a8be0d128c992979e430cb31"} Dec 02 15:22:59 crc kubenswrapper[4900]: I1202 15:22:59.478613 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-c5e2-account-create-update-npvr6" podStartSLOduration=1.478589761 podStartE2EDuration="1.478589761s" podCreationTimestamp="2025-12-02 15:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:22:59.472885941 +0000 UTC m=+6024.888699802" watchObservedRunningTime="2025-12-02 15:22:59.478589761 
+0000 UTC m=+6024.894403622" Dec 02 15:23:00 crc kubenswrapper[4900]: I1202 15:23:00.475190 4900 generic.go:334] "Generic (PLEG): container finished" podID="52132994-d9fc-4431-8497-a23f6c6dc7e5" containerID="2245a909a05736d67ddd2bdc28d65513fdc230abb893db3a2c9cb8c673a8d48c" exitCode=0 Dec 02 15:23:00 crc kubenswrapper[4900]: I1202 15:23:00.475621 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c5e2-account-create-update-npvr6" event={"ID":"52132994-d9fc-4431-8497-a23f6c6dc7e5","Type":"ContainerDied","Data":"2245a909a05736d67ddd2bdc28d65513fdc230abb893db3a2c9cb8c673a8d48c"} Dec 02 15:23:00 crc kubenswrapper[4900]: I1202 15:23:00.894076 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-84gn7" Dec 02 15:23:00 crc kubenswrapper[4900]: I1202 15:23:00.991369 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsl7f\" (UniqueName: \"kubernetes.io/projected/28103557-41db-491f-99c1-3f122972f6b9-kube-api-access-zsl7f\") pod \"28103557-41db-491f-99c1-3f122972f6b9\" (UID: \"28103557-41db-491f-99c1-3f122972f6b9\") " Dec 02 15:23:00 crc kubenswrapper[4900]: I1202 15:23:00.991569 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28103557-41db-491f-99c1-3f122972f6b9-operator-scripts\") pod \"28103557-41db-491f-99c1-3f122972f6b9\" (UID: \"28103557-41db-491f-99c1-3f122972f6b9\") " Dec 02 15:23:00 crc kubenswrapper[4900]: I1202 15:23:00.992029 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28103557-41db-491f-99c1-3f122972f6b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28103557-41db-491f-99c1-3f122972f6b9" (UID: "28103557-41db-491f-99c1-3f122972f6b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:23:00 crc kubenswrapper[4900]: I1202 15:23:00.996372 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28103557-41db-491f-99c1-3f122972f6b9-kube-api-access-zsl7f" (OuterVolumeSpecName: "kube-api-access-zsl7f") pod "28103557-41db-491f-99c1-3f122972f6b9" (UID: "28103557-41db-491f-99c1-3f122972f6b9"). InnerVolumeSpecName "kube-api-access-zsl7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:23:01 crc kubenswrapper[4900]: I1202 15:23:01.093199 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28103557-41db-491f-99c1-3f122972f6b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:01 crc kubenswrapper[4900]: I1202 15:23:01.093230 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsl7f\" (UniqueName: \"kubernetes.io/projected/28103557-41db-491f-99c1-3f122972f6b9-kube-api-access-zsl7f\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:01 crc kubenswrapper[4900]: I1202 15:23:01.496177 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-84gn7" Dec 02 15:23:01 crc kubenswrapper[4900]: I1202 15:23:01.496269 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-84gn7" event={"ID":"28103557-41db-491f-99c1-3f122972f6b9","Type":"ContainerDied","Data":"cc869bc3209196edfddd0449a941d73533001a90a8be0d128c992979e430cb31"} Dec 02 15:23:01 crc kubenswrapper[4900]: I1202 15:23:01.496617 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc869bc3209196edfddd0449a941d73533001a90a8be0d128c992979e430cb31" Dec 02 15:23:01 crc kubenswrapper[4900]: I1202 15:23:01.898890 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c5e2-account-create-update-npvr6" Dec 02 15:23:02 crc kubenswrapper[4900]: I1202 15:23:02.011754 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52132994-d9fc-4431-8497-a23f6c6dc7e5-operator-scripts\") pod \"52132994-d9fc-4431-8497-a23f6c6dc7e5\" (UID: \"52132994-d9fc-4431-8497-a23f6c6dc7e5\") " Dec 02 15:23:02 crc kubenswrapper[4900]: I1202 15:23:02.011852 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4wml\" (UniqueName: \"kubernetes.io/projected/52132994-d9fc-4431-8497-a23f6c6dc7e5-kube-api-access-x4wml\") pod \"52132994-d9fc-4431-8497-a23f6c6dc7e5\" (UID: \"52132994-d9fc-4431-8497-a23f6c6dc7e5\") " Dec 02 15:23:02 crc kubenswrapper[4900]: I1202 15:23:02.012692 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52132994-d9fc-4431-8497-a23f6c6dc7e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52132994-d9fc-4431-8497-a23f6c6dc7e5" (UID: "52132994-d9fc-4431-8497-a23f6c6dc7e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:23:02 crc kubenswrapper[4900]: I1202 15:23:02.024853 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52132994-d9fc-4431-8497-a23f6c6dc7e5-kube-api-access-x4wml" (OuterVolumeSpecName: "kube-api-access-x4wml") pod "52132994-d9fc-4431-8497-a23f6c6dc7e5" (UID: "52132994-d9fc-4431-8497-a23f6c6dc7e5"). InnerVolumeSpecName "kube-api-access-x4wml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:23:02 crc kubenswrapper[4900]: I1202 15:23:02.114821 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52132994-d9fc-4431-8497-a23f6c6dc7e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:02 crc kubenswrapper[4900]: I1202 15:23:02.114859 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4wml\" (UniqueName: \"kubernetes.io/projected/52132994-d9fc-4431-8497-a23f6c6dc7e5-kube-api-access-x4wml\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:02 crc kubenswrapper[4900]: I1202 15:23:02.519537 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c5e2-account-create-update-npvr6" event={"ID":"52132994-d9fc-4431-8497-a23f6c6dc7e5","Type":"ContainerDied","Data":"3da602b69ac87d4ca25f904b7908f9a22638661efc9eecc055d61eb58a893ce2"} Dec 02 15:23:02 crc kubenswrapper[4900]: I1202 15:23:02.519598 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3da602b69ac87d4ca25f904b7908f9a22638661efc9eecc055d61eb58a893ce2" Dec 02 15:23:02 crc kubenswrapper[4900]: I1202 15:23:02.519685 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c5e2-account-create-update-npvr6" Dec 02 15:23:02 crc kubenswrapper[4900]: I1202 15:23:02.823286 4900 scope.go:117] "RemoveContainer" containerID="c13ad78cec7772de4e1d11a116992e6d760febb7a46b19423b2a47db2f15dfc8" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.102312 4900 scope.go:117] "RemoveContainer" containerID="49d9a2a3e82197544076cf9dca3e24961c69e9dead8f2ada55e767769b5f3d6e" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.161936 4900 scope.go:117] "RemoveContainer" containerID="fb58572bacff42ba442cba8b54a32d25469aa19716a4be3c10233cf09f3bd888" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.355685 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-69fcs"] Dec 02 15:23:03 crc kubenswrapper[4900]: E1202 15:23:03.356063 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28103557-41db-491f-99c1-3f122972f6b9" containerName="mariadb-database-create" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.356075 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="28103557-41db-491f-99c1-3f122972f6b9" containerName="mariadb-database-create" Dec 02 15:23:03 crc kubenswrapper[4900]: E1202 15:23:03.356092 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52132994-d9fc-4431-8497-a23f6c6dc7e5" containerName="mariadb-account-create-update" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.356098 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="52132994-d9fc-4431-8497-a23f6c6dc7e5" containerName="mariadb-account-create-update" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.356274 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="52132994-d9fc-4431-8497-a23f6c6dc7e5" containerName="mariadb-account-create-update" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.356291 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="28103557-41db-491f-99c1-3f122972f6b9" containerName="mariadb-database-create" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.357166 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.368188 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-mrb9l" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.368430 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.372972 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-69fcs"] Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.546440 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzplj\" (UniqueName: \"kubernetes.io/projected/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-kube-api-access-pzplj\") pod \"heat-db-sync-69fcs\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.546564 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-config-data\") pod \"heat-db-sync-69fcs\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.546615 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-combined-ca-bundle\") pod \"heat-db-sync-69fcs\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.648485 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-config-data\") pod \"heat-db-sync-69fcs\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.648554 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-combined-ca-bundle\") pod \"heat-db-sync-69fcs\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.648681 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzplj\" (UniqueName: \"kubernetes.io/projected/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-kube-api-access-pzplj\") pod \"heat-db-sync-69fcs\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.656163 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-combined-ca-bundle\") pod \"heat-db-sync-69fcs\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.657740 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-config-data\") pod \"heat-db-sync-69fcs\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " pod="openstack/heat-db-sync-69fcs" 
Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.666287 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzplj\" (UniqueName: \"kubernetes.io/projected/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-kube-api-access-pzplj\") pod \"heat-db-sync-69fcs\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:03 crc kubenswrapper[4900]: I1202 15:23:03.691689 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:04 crc kubenswrapper[4900]: I1202 15:23:04.169774 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-69fcs"] Dec 02 15:23:04 crc kubenswrapper[4900]: I1202 15:23:04.177710 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:23:04 crc kubenswrapper[4900]: I1202 15:23:04.546130 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-69fcs" event={"ID":"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587","Type":"ContainerStarted","Data":"93f823ebbb2b556515c13cb35f0e02f6673e3fdf6cf714980e05b2620f0e981e"} Dec 02 15:23:04 crc kubenswrapper[4900]: I1202 15:23:04.719402 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-94564bc7-b8btv" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Dec 02 15:23:07 crc kubenswrapper[4900]: I1202 15:23:07.157636 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:23:07 crc kubenswrapper[4900]: I1202 15:23:07.158013 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:23:12 crc kubenswrapper[4900]: I1202 15:23:12.625996 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-69fcs" event={"ID":"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587","Type":"ContainerStarted","Data":"4156734e4831055575ff3bec563a474c6ac6c35fa2b97e4cfcaa6c896586337d"} Dec 02 15:23:12 crc kubenswrapper[4900]: I1202 15:23:12.702036 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-69fcs" podStartSLOduration=2.509743478 podStartE2EDuration="9.702011191s" podCreationTimestamp="2025-12-02 15:23:03 +0000 UTC" firstStartedPulling="2025-12-02 15:23:04.177445039 +0000 UTC m=+6029.593258890" lastFinishedPulling="2025-12-02 15:23:11.369712712 +0000 UTC m=+6036.785526603" observedRunningTime="2025-12-02 15:23:12.698622136 +0000 UTC m=+6038.114436007" watchObservedRunningTime="2025-12-02 15:23:12.702011191 +0000 UTC m=+6038.117825052" Dec 02 15:23:14 crc kubenswrapper[4900]: I1202 15:23:14.656151 4900 generic.go:334] "Generic (PLEG): container finished" podID="8064cca9-ba2e-4f86-b4c2-bbe0c03f3587" containerID="4156734e4831055575ff3bec563a474c6ac6c35fa2b97e4cfcaa6c896586337d" exitCode=0 Dec 02 15:23:14 crc kubenswrapper[4900]: I1202 15:23:14.656282 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-69fcs" event={"ID":"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587","Type":"ContainerDied","Data":"4156734e4831055575ff3bec563a474c6ac6c35fa2b97e4cfcaa6c896586337d"} Dec 02 15:23:14 crc kubenswrapper[4900]: I1202 15:23:14.719877 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-94564bc7-b8btv" 
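Note: unlike the earlier pods, heat-db-sync-69fcs actually pulled an image (the credential-provider cache refresh at 15:23:04.177 marks the pull starting), so here podStartSLOduration and podStartE2EDuration diverge. Plugging the logged timestamps into the startSLO sketch from earlier:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// heat-db-sync-69fcs, from the entry above: created 15:23:03, observed
    	// running 15:23:12.702011191, pull 15:23:04.177445039 -> 15:23:11.369712712.
    	created := time.Date(2025, 12, 2, 15, 23, 3, 0, time.UTC)
    	running := time.Date(2025, 12, 2, 15, 23, 12, 702011191, time.UTC)
    	pullStart := time.Date(2025, 12, 2, 15, 23, 4, 177445039, time.UTC)
    	pullEnd := time.Date(2025, 12, 2, 15, 23, 11, 369712712, time.UTC)
    	e2e := running.Sub(created)         // 9.702011191s = podStartE2EDuration
    	slo := e2e - pullEnd.Sub(pullStart) // ~2.509743518s
    	fmt.Println(e2e, slo)
    	// The log's podStartSLOduration=2.509743478 differs in the last digits
    	// because kubelet subtracts monotonic readings (the m=+... values),
    	// not wall-clock timestamps.
    }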
podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Dec 02 15:23:14 crc kubenswrapper[4900]: I1202 15:23:14.720020 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-94564bc7-b8btv" Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.100138 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.258688 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-config-data\") pod \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.258978 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzplj\" (UniqueName: \"kubernetes.io/projected/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-kube-api-access-pzplj\") pod \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.259173 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-combined-ca-bundle\") pod \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\" (UID: \"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587\") " Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.265539 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-kube-api-access-pzplj" (OuterVolumeSpecName: "kube-api-access-pzplj") pod "8064cca9-ba2e-4f86-b4c2-bbe0c03f3587" (UID: "8064cca9-ba2e-4f86-b4c2-bbe0c03f3587"). InnerVolumeSpecName "kube-api-access-pzplj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.298133 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8064cca9-ba2e-4f86-b4c2-bbe0c03f3587" (UID: "8064cca9-ba2e-4f86-b4c2-bbe0c03f3587"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.332813 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-config-data" (OuterVolumeSpecName: "config-data") pod "8064cca9-ba2e-4f86-b4c2-bbe0c03f3587" (UID: "8064cca9-ba2e-4f86-b4c2-bbe0c03f3587"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.362440 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzplj\" (UniqueName: \"kubernetes.io/projected/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-kube-api-access-pzplj\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.362498 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.362517 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.678695 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-69fcs" event={"ID":"8064cca9-ba2e-4f86-b4c2-bbe0c03f3587","Type":"ContainerDied","Data":"93f823ebbb2b556515c13cb35f0e02f6673e3fdf6cf714980e05b2620f0e981e"} Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.678740 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f823ebbb2b556515c13cb35f0e02f6673e3fdf6cf714980e05b2620f0e981e" Dec 02 15:23:16 crc kubenswrapper[4900]: I1202 15:23:16.678805 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-69fcs" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.181698 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5fcb767956-nzh5b"] Dec 02 15:23:18 crc kubenswrapper[4900]: E1202 15:23:18.182749 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064cca9-ba2e-4f86-b4c2-bbe0c03f3587" containerName="heat-db-sync" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.182771 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064cca9-ba2e-4f86-b4c2-bbe0c03f3587" containerName="heat-db-sync" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.183125 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064cca9-ba2e-4f86-b4c2-bbe0c03f3587" containerName="heat-db-sync" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.184295 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.187241 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.191223 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-mrb9l" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.191395 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.197747 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5fcb767956-nzh5b"] Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.204392 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eca1564-298d-433d-b601-43980f0dcf0a-combined-ca-bundle\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.204520 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eca1564-298d-433d-b601-43980f0dcf0a-config-data\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.204578 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8eca1564-298d-433d-b601-43980f0dcf0a-config-data-custom\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.204617 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl4dd\" (UniqueName: \"kubernetes.io/projected/8eca1564-298d-433d-b601-43980f0dcf0a-kube-api-access-wl4dd\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.302762 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5cc74477-qtk82"] Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.306069 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eca1564-298d-433d-b601-43980f0dcf0a-combined-ca-bundle\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.306430 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eca1564-298d-433d-b601-43980f0dcf0a-config-data\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.306492 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8eca1564-298d-433d-b601-43980f0dcf0a-config-data-custom\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.306508 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl4dd\" (UniqueName: \"kubernetes.io/projected/8eca1564-298d-433d-b601-43980f0dcf0a-kube-api-access-wl4dd\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.309579 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.314388 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eca1564-298d-433d-b601-43980f0dcf0a-combined-ca-bundle\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.316476 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.318511 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8eca1564-298d-433d-b601-43980f0dcf0a-config-data-custom\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.322879 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eca1564-298d-433d-b601-43980f0dcf0a-config-data\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.334250 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5cc74477-qtk82"] Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.348210 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7db49f5d4c-rjsh2"] Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.352894 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl4dd\" (UniqueName: \"kubernetes.io/projected/8eca1564-298d-433d-b601-43980f0dcf0a-kube-api-access-wl4dd\") pod \"heat-engine-5fcb767956-nzh5b\" (UID: \"8eca1564-298d-433d-b601-43980f0dcf0a\") " pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.354265 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.360468 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.372138 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7db49f5d4c-rjsh2"] Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.407824 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-combined-ca-bundle\") pod \"heat-cfnapi-5cc74477-qtk82\" (UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.407855 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z5j8\" (UniqueName: \"kubernetes.io/projected/1e5907eb-6866-41f5-81c2-a15c5d1b7379-kube-api-access-4z5j8\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.407887 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g26sr\" (UniqueName: \"kubernetes.io/projected/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-kube-api-access-g26sr\") pod \"heat-cfnapi-5cc74477-qtk82\" (UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.409331 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5907eb-6866-41f5-81c2-a15c5d1b7379-config-data\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.409582 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e5907eb-6866-41f5-81c2-a15c5d1b7379-config-data-custom\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.409813 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5907eb-6866-41f5-81c2-a15c5d1b7379-combined-ca-bundle\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.410005 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-config-data\") pod \"heat-cfnapi-5cc74477-qtk82\" (UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.410155 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-config-data-custom\") pod \"heat-cfnapi-5cc74477-qtk82\" 
(UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.511483 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5907eb-6866-41f5-81c2-a15c5d1b7379-config-data\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.511561 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e5907eb-6866-41f5-81c2-a15c5d1b7379-config-data-custom\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.511596 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5907eb-6866-41f5-81c2-a15c5d1b7379-combined-ca-bundle\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.511622 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-config-data\") pod \"heat-cfnapi-5cc74477-qtk82\" (UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.511682 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-config-data-custom\") pod \"heat-cfnapi-5cc74477-qtk82\" (UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.511714 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-combined-ca-bundle\") pod \"heat-cfnapi-5cc74477-qtk82\" (UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.511729 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z5j8\" (UniqueName: \"kubernetes.io/projected/1e5907eb-6866-41f5-81c2-a15c5d1b7379-kube-api-access-4z5j8\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.511755 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g26sr\" (UniqueName: \"kubernetes.io/projected/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-kube-api-access-g26sr\") pod \"heat-cfnapi-5cc74477-qtk82\" (UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.516280 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e5907eb-6866-41f5-81c2-a15c5d1b7379-config-data-custom\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " 
pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.521786 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5907eb-6866-41f5-81c2-a15c5d1b7379-config-data\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.522032 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-config-data\") pod \"heat-cfnapi-5cc74477-qtk82\" (UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.522631 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.529359 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5907eb-6866-41f5-81c2-a15c5d1b7379-combined-ca-bundle\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.529587 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-combined-ca-bundle\") pod \"heat-cfnapi-5cc74477-qtk82\" (UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.531413 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-config-data-custom\") pod \"heat-cfnapi-5cc74477-qtk82\" (UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.532444 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g26sr\" (UniqueName: \"kubernetes.io/projected/b7b3fd2e-0a49-42c3-a44e-cb79074ab660-kube-api-access-g26sr\") pod \"heat-cfnapi-5cc74477-qtk82\" (UID: \"b7b3fd2e-0a49-42c3-a44e-cb79074ab660\") " pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.533656 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z5j8\" (UniqueName: \"kubernetes.io/projected/1e5907eb-6866-41f5-81c2-a15c5d1b7379-kube-api-access-4z5j8\") pod \"heat-api-7db49f5d4c-rjsh2\" (UID: \"1e5907eb-6866-41f5-81c2-a15c5d1b7379\") " pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.721351 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:18 crc kubenswrapper[4900]: I1202 15:23:18.721670 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.017414 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5fcb767956-nzh5b"] Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.214254 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7db49f5d4c-rjsh2"] Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.269952 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.388753 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5cc74477-qtk82"] Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.613994 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-94564bc7-b8btv" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.641459 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-scripts\") pod \"e662316f-b3c6-471c-a87c-f5cc7a402917\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.641564 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-config-data\") pod \"e662316f-b3c6-471c-a87c-f5cc7a402917\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.641601 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e662316f-b3c6-471c-a87c-f5cc7a402917-logs\") pod \"e662316f-b3c6-471c-a87c-f5cc7a402917\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.641675 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qk5s\" (UniqueName: \"kubernetes.io/projected/e662316f-b3c6-471c-a87c-f5cc7a402917-kube-api-access-8qk5s\") pod \"e662316f-b3c6-471c-a87c-f5cc7a402917\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.641773 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e662316f-b3c6-471c-a87c-f5cc7a402917-horizon-secret-key\") pod \"e662316f-b3c6-471c-a87c-f5cc7a402917\" (UID: \"e662316f-b3c6-471c-a87c-f5cc7a402917\") " Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.642162 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e662316f-b3c6-471c-a87c-f5cc7a402917-logs" (OuterVolumeSpecName: "logs") pod "e662316f-b3c6-471c-a87c-f5cc7a402917" (UID: "e662316f-b3c6-471c-a87c-f5cc7a402917"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.643104 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e662316f-b3c6-471c-a87c-f5cc7a402917-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.646702 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e662316f-b3c6-471c-a87c-f5cc7a402917-kube-api-access-8qk5s" (OuterVolumeSpecName: "kube-api-access-8qk5s") pod "e662316f-b3c6-471c-a87c-f5cc7a402917" (UID: "e662316f-b3c6-471c-a87c-f5cc7a402917"). InnerVolumeSpecName "kube-api-access-8qk5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.649728 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e662316f-b3c6-471c-a87c-f5cc7a402917-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e662316f-b3c6-471c-a87c-f5cc7a402917" (UID: "e662316f-b3c6-471c-a87c-f5cc7a402917"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.689358 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-config-data" (OuterVolumeSpecName: "config-data") pod "e662316f-b3c6-471c-a87c-f5cc7a402917" (UID: "e662316f-b3c6-471c-a87c-f5cc7a402917"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.694083 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-scripts" (OuterVolumeSpecName: "scripts") pod "e662316f-b3c6-471c-a87c-f5cc7a402917" (UID: "e662316f-b3c6-471c-a87c-f5cc7a402917"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.720737 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5cc74477-qtk82" event={"ID":"b7b3fd2e-0a49-42c3-a44e-cb79074ab660","Type":"ContainerStarted","Data":"87f4a314976cba33bfc5da89cf756323d03bd0c68e00a14208379792a957ab2f"} Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.728800 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7db49f5d4c-rjsh2" event={"ID":"1e5907eb-6866-41f5-81c2-a15c5d1b7379","Type":"ContainerStarted","Data":"9d87439e16e8d35e31f32d4450118653e45d69df6366e22ae2699eceb5f308dc"} Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.731413 4900 generic.go:334] "Generic (PLEG): container finished" podID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerID="aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20" exitCode=137 Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.731469 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94564bc7-b8btv" event={"ID":"e662316f-b3c6-471c-a87c-f5cc7a402917","Type":"ContainerDied","Data":"aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20"} Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.731492 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-94564bc7-b8btv" event={"ID":"e662316f-b3c6-471c-a87c-f5cc7a402917","Type":"ContainerDied","Data":"63e1f775ff13205283df15ad1442e00d81531a150f6c0bae2fb3487f3a977631"} Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.731510 4900 scope.go:117] "RemoveContainer" containerID="64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.731621 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-94564bc7-b8btv" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.746070 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.746114 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e662316f-b3c6-471c-a87c-f5cc7a402917-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.746127 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qk5s\" (UniqueName: \"kubernetes.io/projected/e662316f-b3c6-471c-a87c-f5cc7a402917-kube-api-access-8qk5s\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.746137 4900 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e662316f-b3c6-471c-a87c-f5cc7a402917-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.749904 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5fcb767956-nzh5b" event={"ID":"8eca1564-298d-433d-b601-43980f0dcf0a","Type":"ContainerStarted","Data":"431de067fcc5ec22e665da451bc550cdc21a523b3ffd6e68186159d7c5013375"} Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.749951 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5fcb767956-nzh5b" event={"ID":"8eca1564-298d-433d-b601-43980f0dcf0a","Type":"ContainerStarted","Data":"e84b13f93597545e4747b9484c2c5057dd93f1d19f8bc21e95237b3e9da237dd"} Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.751093 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.775687 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5fcb767956-nzh5b" podStartSLOduration=1.775670105 podStartE2EDuration="1.775670105s" podCreationTimestamp="2025-12-02 15:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:23:19.766496338 +0000 UTC m=+6045.182310179" watchObservedRunningTime="2025-12-02 15:23:19.775670105 +0000 UTC m=+6045.191483956" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.787698 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-94564bc7-b8btv"] Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.795182 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-94564bc7-b8btv"] Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.943488 4900 scope.go:117] "RemoveContainer" containerID="aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.966943 4900 scope.go:117] "RemoveContainer" containerID="64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362" Dec 02 15:23:19 crc kubenswrapper[4900]: E1202 15:23:19.968115 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362\": container with ID starting with 64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362 not found: 
ID does not exist" containerID="64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.968230 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362"} err="failed to get container status \"64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362\": rpc error: code = NotFound desc = could not find container \"64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362\": container with ID starting with 64dd43c038a0cd9d454485bc51101576ba47303f2aa501602577f4ecebd4d362 not found: ID does not exist" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.968341 4900 scope.go:117] "RemoveContainer" containerID="aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20" Dec 02 15:23:19 crc kubenswrapper[4900]: E1202 15:23:19.968833 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20\": container with ID starting with aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20 not found: ID does not exist" containerID="aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20" Dec 02 15:23:19 crc kubenswrapper[4900]: I1202 15:23:19.968887 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20"} err="failed to get container status \"aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20\": rpc error: code = NotFound desc = could not find container \"aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20\": container with ID starting with aed28cd3abc690f60566d6d014d7744413cca5dffe2d6780429e0229e8ccaa20 not found: ID does not exist" Dec 02 15:23:20 crc kubenswrapper[4900]: I1202 15:23:20.921729 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" path="/var/lib/kubelet/pods/e662316f-b3c6-471c-a87c-f5cc7a402917/volumes" Dec 02 15:23:21 crc kubenswrapper[4900]: I1202 15:23:21.315286 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b86d75b6f-gp8ml" Dec 02 15:23:21 crc kubenswrapper[4900]: I1202 15:23:21.405377 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d5c9bf6cf-lz9kl"] Dec 02 15:23:21 crc kubenswrapper[4900]: I1202 15:23:21.405859 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d5c9bf6cf-lz9kl" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon-log" containerID="cri-o://760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661" gracePeriod=30 Dec 02 15:23:21 crc kubenswrapper[4900]: I1202 15:23:21.406052 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d5c9bf6cf-lz9kl" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon" containerID="cri-o://de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06" gracePeriod=30 Dec 02 15:23:22 crc kubenswrapper[4900]: I1202 15:23:22.064454 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4l8fr"] Dec 02 15:23:22 crc kubenswrapper[4900]: I1202 15:23:22.084747 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-b13d-account-create-update-xmbc8"] Dec 02 15:23:22 crc kubenswrapper[4900]: I1202 15:23:22.099554 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4l8fr"] Dec 02 15:23:22 crc kubenswrapper[4900]: I1202 15:23:22.108512 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b13d-account-create-update-xmbc8"] Dec 02 15:23:22 crc kubenswrapper[4900]: I1202 15:23:22.780158 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7db49f5d4c-rjsh2" event={"ID":"1e5907eb-6866-41f5-81c2-a15c5d1b7379","Type":"ContainerStarted","Data":"a29d19af66cd2998cc716af6bfa91818221a978465c1e0bfd2a8a4660e4ddccf"} Dec 02 15:23:22 crc kubenswrapper[4900]: I1202 15:23:22.780459 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:22 crc kubenswrapper[4900]: I1202 15:23:22.783274 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5cc74477-qtk82" event={"ID":"b7b3fd2e-0a49-42c3-a44e-cb79074ab660","Type":"ContainerStarted","Data":"92849317de086565ee2f1e842397cd26d8416055236b0f9c38ffa53b0a094245"} Dec 02 15:23:22 crc kubenswrapper[4900]: I1202 15:23:22.783519 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:22 crc kubenswrapper[4900]: I1202 15:23:22.808400 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7db49f5d4c-rjsh2" podStartSLOduration=2.133575219 podStartE2EDuration="4.808375935s" podCreationTimestamp="2025-12-02 15:23:18 +0000 UTC" firstStartedPulling="2025-12-02 15:23:19.24027051 +0000 UTC m=+6044.656084361" lastFinishedPulling="2025-12-02 15:23:21.915071226 +0000 UTC m=+6047.330885077" observedRunningTime="2025-12-02 15:23:22.801822181 +0000 UTC m=+6048.217636072" watchObservedRunningTime="2025-12-02 15:23:22.808375935 +0000 UTC m=+6048.224189806" Dec 02 15:23:22 crc kubenswrapper[4900]: I1202 15:23:22.926103 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ebc034-0549-46ca-b553-739e5317d5ff" path="/var/lib/kubelet/pods/48ebc034-0549-46ca-b553-739e5317d5ff/volumes" Dec 02 15:23:22 crc kubenswrapper[4900]: I1202 15:23:22.927150 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82105e45-2a76-4957-af32-bed10436bcff" path="/var/lib/kubelet/pods/82105e45-2a76-4957-af32-bed10436bcff/volumes" Dec 02 15:23:24 crc kubenswrapper[4900]: I1202 15:23:24.802727 4900 generic.go:334] "Generic (PLEG): container finished" podID="d86072e1-840c-4704-be8a-0338ee314daa" containerID="de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06" exitCode=0 Dec 02 15:23:24 crc kubenswrapper[4900]: I1202 15:23:24.802797 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5c9bf6cf-lz9kl" event={"ID":"d86072e1-840c-4704-be8a-0338ee314daa","Type":"ContainerDied","Data":"de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06"} Dec 02 15:23:25 crc kubenswrapper[4900]: I1202 15:23:25.531945 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d5c9bf6cf-lz9kl" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.110:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8080: connect: connection refused" Dec 02 15:23:30 crc kubenswrapper[4900]: I1202 15:23:30.029010 4900 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/heat-cfnapi-5cc74477-qtk82" podStartSLOduration=9.501069381 podStartE2EDuration="12.028987025s" podCreationTimestamp="2025-12-02 15:23:18 +0000 UTC" firstStartedPulling="2025-12-02 15:23:19.392184083 +0000 UTC m=+6044.807997934" lastFinishedPulling="2025-12-02 15:23:21.920101727 +0000 UTC m=+6047.335915578" observedRunningTime="2025-12-02 15:23:22.832024648 +0000 UTC m=+6048.247838519" watchObservedRunningTime="2025-12-02 15:23:30.028987025 +0000 UTC m=+6055.444800886" Dec 02 15:23:30 crc kubenswrapper[4900]: I1202 15:23:30.034932 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qj5hh"] Dec 02 15:23:30 crc kubenswrapper[4900]: I1202 15:23:30.038026 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5cc74477-qtk82" Dec 02 15:23:30 crc kubenswrapper[4900]: I1202 15:23:30.044561 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7db49f5d4c-rjsh2" Dec 02 15:23:30 crc kubenswrapper[4900]: I1202 15:23:30.046569 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qj5hh"] Dec 02 15:23:30 crc kubenswrapper[4900]: I1202 15:23:30.922001 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bcbe4ce-733f-4352-9564-826b6113b4dd" path="/var/lib/kubelet/pods/1bcbe4ce-733f-4352-9564-826b6113b4dd/volumes" Dec 02 15:23:35 crc kubenswrapper[4900]: I1202 15:23:35.532240 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d5c9bf6cf-lz9kl" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.110:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8080: connect: connection refused" Dec 02 15:23:38 crc kubenswrapper[4900]: I1202 15:23:38.590608 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5fcb767956-nzh5b" Dec 02 15:23:45 crc kubenswrapper[4900]: I1202 15:23:45.532141 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d5c9bf6cf-lz9kl" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.110:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8080: connect: connection refused" Dec 02 15:23:45 crc kubenswrapper[4900]: I1202 15:23:45.532722 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:23:51 crc kubenswrapper[4900]: I1202 15:23:51.883456 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.072809 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-config-data\") pod \"d86072e1-840c-4704-be8a-0338ee314daa\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.072886 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d86072e1-840c-4704-be8a-0338ee314daa-horizon-secret-key\") pod \"d86072e1-840c-4704-be8a-0338ee314daa\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.072996 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg2qx\" (UniqueName: \"kubernetes.io/projected/d86072e1-840c-4704-be8a-0338ee314daa-kube-api-access-zg2qx\") pod \"d86072e1-840c-4704-be8a-0338ee314daa\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.073013 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-scripts\") pod \"d86072e1-840c-4704-be8a-0338ee314daa\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.073154 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d86072e1-840c-4704-be8a-0338ee314daa-logs\") pod \"d86072e1-840c-4704-be8a-0338ee314daa\" (UID: \"d86072e1-840c-4704-be8a-0338ee314daa\") " Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.074504 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d86072e1-840c-4704-be8a-0338ee314daa-logs" (OuterVolumeSpecName: "logs") pod "d86072e1-840c-4704-be8a-0338ee314daa" (UID: "d86072e1-840c-4704-be8a-0338ee314daa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.078960 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86072e1-840c-4704-be8a-0338ee314daa-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d86072e1-840c-4704-be8a-0338ee314daa" (UID: "d86072e1-840c-4704-be8a-0338ee314daa"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.084918 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d86072e1-840c-4704-be8a-0338ee314daa-kube-api-access-zg2qx" (OuterVolumeSpecName: "kube-api-access-zg2qx") pod "d86072e1-840c-4704-be8a-0338ee314daa" (UID: "d86072e1-840c-4704-be8a-0338ee314daa"). InnerVolumeSpecName "kube-api-access-zg2qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.098161 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-scripts" (OuterVolumeSpecName: "scripts") pod "d86072e1-840c-4704-be8a-0338ee314daa" (UID: "d86072e1-840c-4704-be8a-0338ee314daa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.102502 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-config-data" (OuterVolumeSpecName: "config-data") pod "d86072e1-840c-4704-be8a-0338ee314daa" (UID: "d86072e1-840c-4704-be8a-0338ee314daa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.103239 4900 generic.go:334] "Generic (PLEG): container finished" podID="d86072e1-840c-4704-be8a-0338ee314daa" containerID="760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661" exitCode=137 Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.103280 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5c9bf6cf-lz9kl" event={"ID":"d86072e1-840c-4704-be8a-0338ee314daa","Type":"ContainerDied","Data":"760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661"} Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.103310 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d5c9bf6cf-lz9kl" event={"ID":"d86072e1-840c-4704-be8a-0338ee314daa","Type":"ContainerDied","Data":"e58dc4e174eba824a6370f452323eff20efd89490f42daba589c51f185ed56df"} Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.103327 4900 scope.go:117] "RemoveContainer" containerID="de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.103347 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d5c9bf6cf-lz9kl" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.175905 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d86072e1-840c-4704-be8a-0338ee314daa-logs\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.175964 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.175975 4900 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d86072e1-840c-4704-be8a-0338ee314daa-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.175986 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg2qx\" (UniqueName: \"kubernetes.io/projected/d86072e1-840c-4704-be8a-0338ee314daa-kube-api-access-zg2qx\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.175997 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d86072e1-840c-4704-be8a-0338ee314daa-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.191577 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d5c9bf6cf-lz9kl"] Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.199663 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d5c9bf6cf-lz9kl"] Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.320968 4900 scope.go:117] "RemoveContainer" containerID="760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661" Dec 02 
15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.338317 4900 scope.go:117] "RemoveContainer" containerID="de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06" Dec 02 15:23:52 crc kubenswrapper[4900]: E1202 15:23:52.338798 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06\": container with ID starting with de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06 not found: ID does not exist" containerID="de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.338847 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06"} err="failed to get container status \"de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06\": rpc error: code = NotFound desc = could not find container \"de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06\": container with ID starting with de80fc544bf7ab62a9f2a81c603b5d024f797bbab3e6c1c2f71bb9e5807f2f06 not found: ID does not exist" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.338879 4900 scope.go:117] "RemoveContainer" containerID="760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661" Dec 02 15:23:52 crc kubenswrapper[4900]: E1202 15:23:52.339183 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661\": container with ID starting with 760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661 not found: ID does not exist" containerID="760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.339219 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661"} err="failed to get container status \"760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661\": rpc error: code = NotFound desc = could not find container \"760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661\": container with ID starting with 760d98d2ae61dc81fd0d14c7c30148046419cf40d11b5dea45c8ac12ecd20661 not found: ID does not exist" Dec 02 15:23:52 crc kubenswrapper[4900]: I1202 15:23:52.926828 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d86072e1-840c-4704-be8a-0338ee314daa" path="/var/lib/kubelet/pods/d86072e1-840c-4704-be8a-0338ee314daa/volumes" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.137104 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg"] Dec 02 15:23:54 crc kubenswrapper[4900]: E1202 15:23:54.137830 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.137845 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon" Dec 02 15:23:54 crc kubenswrapper[4900]: E1202 15:23:54.137871 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 
15:23:54.137879 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon" Dec 02 15:23:54 crc kubenswrapper[4900]: E1202 15:23:54.137899 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon-log" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.137907 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon-log" Dec 02 15:23:54 crc kubenswrapper[4900]: E1202 15:23:54.137934 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon-log" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.137943 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon-log" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.138166 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon-log" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.138181 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon-log" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.138212 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e662316f-b3c6-471c-a87c-f5cc7a402917" containerName="horizon" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.138228 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d86072e1-840c-4704-be8a-0338ee314daa" containerName="horizon" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.139886 4900 util.go:30] "No sandbox for pod can be found. 
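The paired `ContainerStatus from runtime service failed` / `DeleteContainer returned error` entries (here for horizon-6d5c9bf6cf-lz9kl, and earlier for horizon-94564bc7-b8btv) are the kubelet re-checking containers it has already removed; CRI-O answers with gRPC `NotFound`, which is harmless during cleanup because deletes are meant to be idempotent. A sketch of that pattern with grpc-go, using a synthetic error rather than a live CRI connection:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer call; here it
// always reports NotFound, like the runtime does for an already-deleted ID.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	err := removeContainer("de80fc544bf7...")
	if status.Code(err) == codes.NotFound {
		// Already gone: treat as success, exactly because repeated
		// cleanup passes should converge rather than fail.
		fmt.Println("container already removed; ignoring:", err)
		return
	}
	if err != nil {
		fmt.Println("delete failed:", err)
	}
}
```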
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.142545 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.159566 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg"] Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.327688 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj2b7\" (UniqueName: \"kubernetes.io/projected/a46412aa-8251-4e10-ac23-23c749eeca63-kube-api-access-nj2b7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.327765 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.327948 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.430878 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.431019 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj2b7\" (UniqueName: \"kubernetes.io/projected/a46412aa-8251-4e10-ac23-23c749eeca63-kube-api-access-nj2b7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.431043 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.432255 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.434585 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.464025 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj2b7\" (UniqueName: \"kubernetes.io/projected/a46412aa-8251-4e10-ac23-23c749eeca63-kube-api-access-nj2b7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:23:54 crc kubenswrapper[4900]: I1202 15:23:54.476331 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:23:55 crc kubenswrapper[4900]: I1202 15:23:55.117842 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg"] Dec 02 15:23:55 crc kubenswrapper[4900]: W1202 15:23:55.126151 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda46412aa_8251_4e10_ac23_23c749eeca63.slice/crio-3b7cc03952cc58519677e086ae5abe267f1baa7a5b05377d1829d295271a280d WatchSource:0}: Error finding container 3b7cc03952cc58519677e086ae5abe267f1baa7a5b05377d1829d295271a280d: Status 404 returned error can't find the container with id 3b7cc03952cc58519677e086ae5abe267f1baa7a5b05377d1829d295271a280d Dec 02 15:23:55 crc kubenswrapper[4900]: I1202 15:23:55.147513 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" event={"ID":"a46412aa-8251-4e10-ac23-23c749eeca63","Type":"ContainerStarted","Data":"3b7cc03952cc58519677e086ae5abe267f1baa7a5b05377d1829d295271a280d"} Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.158142 4900 generic.go:334] "Generic (PLEG): container finished" podID="a46412aa-8251-4e10-ac23-23c749eeca63" containerID="cbdc4c4e68cecd38c29c99dd3c11adfebb5c5a5ae0b3d3d5e9331510e6e54251" exitCode=0 Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.158235 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" event={"ID":"a46412aa-8251-4e10-ac23-23c749eeca63","Type":"ContainerDied","Data":"cbdc4c4e68cecd38c29c99dd3c11adfebb5c5a5ae0b3d3d5e9331510e6e54251"} Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.472961 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dww6z"] Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.475717 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dww6z"
Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.502971 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dww6z"]
Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.594113 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5j95\" (UniqueName: \"kubernetes.io/projected/7aa847ae-bf9f-4727-aa6d-235721900502-kube-api-access-m5j95\") pod \"redhat-operators-dww6z\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " pod="openshift-marketplace/redhat-operators-dww6z"
Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.594675 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-utilities\") pod \"redhat-operators-dww6z\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " pod="openshift-marketplace/redhat-operators-dww6z"
Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.594865 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-catalog-content\") pod \"redhat-operators-dww6z\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " pod="openshift-marketplace/redhat-operators-dww6z"
Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.696897 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5j95\" (UniqueName: \"kubernetes.io/projected/7aa847ae-bf9f-4727-aa6d-235721900502-kube-api-access-m5j95\") pod \"redhat-operators-dww6z\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " pod="openshift-marketplace/redhat-operators-dww6z"
Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.696977 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-utilities\") pod \"redhat-operators-dww6z\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " pod="openshift-marketplace/redhat-operators-dww6z"
Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.697049 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-catalog-content\") pod \"redhat-operators-dww6z\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " pod="openshift-marketplace/redhat-operators-dww6z"
Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.697788 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-utilities\") pod \"redhat-operators-dww6z\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " pod="openshift-marketplace/redhat-operators-dww6z"
Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.697791 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-catalog-content\") pod \"redhat-operators-dww6z\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " pod="openshift-marketplace/redhat-operators-dww6z"
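Each redhat-operators-dww6z volume above passes through the volume reconciler's standard three-step sequence: operationExecutor.VerifyControllerAttachedVolume, operationExecutor.MountVolume started, then MountVolume.SetUp succeeded (the projected kube-api-access-m5j95 SetUp lands just below, roughly 130 ms after its verification). A minimal log-side extractor for that sequence, assuming only the klog message shapes visible in this capture; the helper and its regexes are illustrative, not kubelet API:

    import re
    from collections import defaultdict

    # Illustrative helper (not part of the kubelet): group the reconciler's
    # three volume phases by volume name. The patterns match the escaped
    # \"name\" quoting exactly as it appears in this journal capture.
    PHASES = {
        "attach-verified": re.compile(r'operationExecutor\.VerifyControllerAttachedVolume started for volume \\"([^"\\]+)\\"'),
        "mount-started":   re.compile(r'operationExecutor\.MountVolume started for volume \\"([^"\\]+)\\"'),
        "setup-done":      re.compile(r'MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\"'),
    }

    def volume_phases(journal_text: str) -> dict:
        seen = defaultdict(list)
        for phase, rx in PHASES.items():
            for volume in rx.findall(journal_text):
                seen[volume].append(phase)
        return dict(seen)

    # e.g. volume_phases(text)["utilities"] -> ["attach-verified", "mount-started", "setup-done"]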
\"kube-api-access-m5j95\" (UniqueName: \"kubernetes.io/projected/7aa847ae-bf9f-4727-aa6d-235721900502-kube-api-access-m5j95\") pod \"redhat-operators-dww6z\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " pod="openshift-marketplace/redhat-operators-dww6z" Dec 02 15:23:56 crc kubenswrapper[4900]: I1202 15:23:56.798399 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dww6z" Dec 02 15:23:57 crc kubenswrapper[4900]: W1202 15:23:57.371734 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aa847ae_bf9f_4727_aa6d_235721900502.slice/crio-bda28ea6c10d02ee0cf4500e3f55892846eccb44ceacb8583e2c505cb005e166 WatchSource:0}: Error finding container bda28ea6c10d02ee0cf4500e3f55892846eccb44ceacb8583e2c505cb005e166: Status 404 returned error can't find the container with id bda28ea6c10d02ee0cf4500e3f55892846eccb44ceacb8583e2c505cb005e166 Dec 02 15:23:57 crc kubenswrapper[4900]: I1202 15:23:57.391422 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dww6z"] Dec 02 15:23:58 crc kubenswrapper[4900]: I1202 15:23:58.179334 4900 generic.go:334] "Generic (PLEG): container finished" podID="a46412aa-8251-4e10-ac23-23c749eeca63" containerID="701ab7fb77d1e92bd3358128759748d5b2fa57550baf7a9f7d3c736487fd9ac3" exitCode=0 Dec 02 15:23:58 crc kubenswrapper[4900]: I1202 15:23:58.179427 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" event={"ID":"a46412aa-8251-4e10-ac23-23c749eeca63","Type":"ContainerDied","Data":"701ab7fb77d1e92bd3358128759748d5b2fa57550baf7a9f7d3c736487fd9ac3"} Dec 02 15:23:58 crc kubenswrapper[4900]: I1202 15:23:58.185935 4900 generic.go:334] "Generic (PLEG): container finished" podID="7aa847ae-bf9f-4727-aa6d-235721900502" containerID="09c3c0cff08d58e6ca4ac4e912e0146b8bc37ece20b6c00e9d2aa3f8f8d116b9" exitCode=0 Dec 02 15:23:58 crc kubenswrapper[4900]: I1202 15:23:58.185977 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dww6z" event={"ID":"7aa847ae-bf9f-4727-aa6d-235721900502","Type":"ContainerDied","Data":"09c3c0cff08d58e6ca4ac4e912e0146b8bc37ece20b6c00e9d2aa3f8f8d116b9"} Dec 02 15:23:58 crc kubenswrapper[4900]: I1202 15:23:58.186005 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dww6z" event={"ID":"7aa847ae-bf9f-4727-aa6d-235721900502","Type":"ContainerStarted","Data":"bda28ea6c10d02ee0cf4500e3f55892846eccb44ceacb8583e2c505cb005e166"} Dec 02 15:23:59 crc kubenswrapper[4900]: I1202 15:23:59.201200 4900 generic.go:334] "Generic (PLEG): container finished" podID="a46412aa-8251-4e10-ac23-23c749eeca63" containerID="9ad83838a0724fd6a85d8cf5aa355f737233e66d55e252df55fd0af55110c9a8" exitCode=0 Dec 02 15:23:59 crc kubenswrapper[4900]: I1202 15:23:59.201298 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" event={"ID":"a46412aa-8251-4e10-ac23-23c749eeca63","Type":"ContainerDied","Data":"9ad83838a0724fd6a85d8cf5aa355f737233e66d55e252df55fd0af55110c9a8"} Dec 02 15:24:00 crc kubenswrapper[4900]: I1202 15:24:00.598518 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:24:00 crc kubenswrapper[4900]: I1202 15:24:00.686281 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj2b7\" (UniqueName: \"kubernetes.io/projected/a46412aa-8251-4e10-ac23-23c749eeca63-kube-api-access-nj2b7\") pod \"a46412aa-8251-4e10-ac23-23c749eeca63\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " Dec 02 15:24:00 crc kubenswrapper[4900]: I1202 15:24:00.686481 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-util\") pod \"a46412aa-8251-4e10-ac23-23c749eeca63\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " Dec 02 15:24:00 crc kubenswrapper[4900]: I1202 15:24:00.686599 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-bundle\") pod \"a46412aa-8251-4e10-ac23-23c749eeca63\" (UID: \"a46412aa-8251-4e10-ac23-23c749eeca63\") " Dec 02 15:24:00 crc kubenswrapper[4900]: I1202 15:24:00.689068 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-bundle" (OuterVolumeSpecName: "bundle") pod "a46412aa-8251-4e10-ac23-23c749eeca63" (UID: "a46412aa-8251-4e10-ac23-23c749eeca63"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:24:00 crc kubenswrapper[4900]: I1202 15:24:00.698244 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-util" (OuterVolumeSpecName: "util") pod "a46412aa-8251-4e10-ac23-23c749eeca63" (UID: "a46412aa-8251-4e10-ac23-23c749eeca63"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:24:00 crc kubenswrapper[4900]: I1202 15:24:00.699754 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46412aa-8251-4e10-ac23-23c749eeca63-kube-api-access-nj2b7" (OuterVolumeSpecName: "kube-api-access-nj2b7") pod "a46412aa-8251-4e10-ac23-23c749eeca63" (UID: "a46412aa-8251-4e10-ac23-23c749eeca63"). InnerVolumeSpecName "kube-api-access-nj2b7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:24:00 crc kubenswrapper[4900]: I1202 15:24:00.788499 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj2b7\" (UniqueName: \"kubernetes.io/projected/a46412aa-8251-4e10-ac23-23c749eeca63-kube-api-access-nj2b7\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:00 crc kubenswrapper[4900]: I1202 15:24:00.788532 4900 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-util\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:00 crc kubenswrapper[4900]: I1202 15:24:00.788545 4900 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a46412aa-8251-4e10-ac23-23c749eeca63-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:01 crc kubenswrapper[4900]: I1202 15:24:01.224108 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" event={"ID":"a46412aa-8251-4e10-ac23-23c749eeca63","Type":"ContainerDied","Data":"3b7cc03952cc58519677e086ae5abe267f1baa7a5b05377d1829d295271a280d"} Dec 02 15:24:01 crc kubenswrapper[4900]: I1202 15:24:01.224149 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b7cc03952cc58519677e086ae5abe267f1baa7a5b05377d1829d295271a280d" Dec 02 15:24:01 crc kubenswrapper[4900]: I1202 15:24:01.224213 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg" Dec 02 15:24:03 crc kubenswrapper[4900]: I1202 15:24:03.325430 4900 scope.go:117] "RemoveContainer" containerID="a8258420d5640e0952b59fa881c36fb46769ede6a5e160daa15f31a36458d586" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.103481 4900 scope.go:117] "RemoveContainer" containerID="41ff189566196b65f42dea204a2041e602fa90e743bf9f85de3f0b31d32811e1" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.194752 4900 scope.go:117] "RemoveContainer" containerID="8c9b389eb786ea66b0ff9aee7f2766519257029c9dbe8640949564c29dd126de" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.654624 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-qv5tp"] Dec 02 15:24:10 crc kubenswrapper[4900]: E1202 15:24:10.655309 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46412aa-8251-4e10-ac23-23c749eeca63" containerName="extract" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.655325 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46412aa-8251-4e10-ac23-23c749eeca63" containerName="extract" Dec 02 15:24:10 crc kubenswrapper[4900]: E1202 15:24:10.655368 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46412aa-8251-4e10-ac23-23c749eeca63" containerName="pull" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.655374 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46412aa-8251-4e10-ac23-23c749eeca63" containerName="pull" Dec 02 15:24:10 crc kubenswrapper[4900]: E1202 15:24:10.655391 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46412aa-8251-4e10-ac23-23c749eeca63" containerName="util" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.655397 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46412aa-8251-4e10-ac23-23c749eeca63" containerName="util" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 
Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.655577 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46412aa-8251-4e10-ac23-23c749eeca63" containerName="extract"
Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.656286 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qv5tp"
Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.685910 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.688037 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-gzzt6"
Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.688515 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.694701 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-qv5tp"]
Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.710795 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n67cx\" (UniqueName: \"kubernetes.io/projected/4c8af68c-3b2d-44ce-86e7-5d94a1038d5f-kube-api-access-n67cx\") pod \"obo-prometheus-operator-668cf9dfbb-qv5tp\" (UID: \"4c8af68c-3b2d-44ce-86e7-5d94a1038d5f\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qv5tp"
Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.788864 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr"]
Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.794193 4900 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.797959 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-twzv5" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.798075 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.818743 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n67cx\" (UniqueName: \"kubernetes.io/projected/4c8af68c-3b2d-44ce-86e7-5d94a1038d5f-kube-api-access-n67cx\") pod \"obo-prometheus-operator-668cf9dfbb-qv5tp\" (UID: \"4c8af68c-3b2d-44ce-86e7-5d94a1038d5f\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qv5tp" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.844712 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr"] Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.845791 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n67cx\" (UniqueName: \"kubernetes.io/projected/4c8af68c-3b2d-44ce-86e7-5d94a1038d5f-kube-api-access-n67cx\") pod \"obo-prometheus-operator-668cf9dfbb-qv5tp\" (UID: \"4c8af68c-3b2d-44ce-86e7-5d94a1038d5f\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qv5tp" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.852954 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g"] Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.854917 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.869126 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g"] Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.930369 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18ac83f9-098d-4d3f-ab15-413671561160-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-fx89g\" (UID: \"18ac83f9-098d-4d3f-ab15-413671561160\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.930417 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18ac83f9-098d-4d3f-ab15-413671561160-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-fx89g\" (UID: \"18ac83f9-098d-4d3f-ab15-413671561160\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.930483 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e5fbb4b2-ff82-49df-8e11-16aa2e348fa0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-bcmqr\" (UID: \"e5fbb4b2-ff82-49df-8e11-16aa2e348fa0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.930633 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e5fbb4b2-ff82-49df-8e11-16aa2e348fa0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-bcmqr\" (UID: \"e5fbb4b2-ff82-49df-8e11-16aa2e348fa0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.978305 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-w8mjx"] Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.979935 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.981553 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.981875 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-4bfw8" Dec 02 15:24:10 crc kubenswrapper[4900]: I1202 15:24:10.985578 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qv5tp" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.001903 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-w8mjx"] Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.033005 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18ac83f9-098d-4d3f-ab15-413671561160-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-fx89g\" (UID: \"18ac83f9-098d-4d3f-ab15-413671561160\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.033334 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18ac83f9-098d-4d3f-ab15-413671561160-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-fx89g\" (UID: \"18ac83f9-098d-4d3f-ab15-413671561160\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.033471 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e5fbb4b2-ff82-49df-8e11-16aa2e348fa0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-bcmqr\" (UID: \"e5fbb4b2-ff82-49df-8e11-16aa2e348fa0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.033683 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e5fbb4b2-ff82-49df-8e11-16aa2e348fa0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-bcmqr\" (UID: \"e5fbb4b2-ff82-49df-8e11-16aa2e348fa0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.038350 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e5fbb4b2-ff82-49df-8e11-16aa2e348fa0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-bcmqr\" (UID: \"e5fbb4b2-ff82-49df-8e11-16aa2e348fa0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.049266 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e5fbb4b2-ff82-49df-8e11-16aa2e348fa0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-bcmqr\" (UID: \"e5fbb4b2-ff82-49df-8e11-16aa2e348fa0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.049409 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18ac83f9-098d-4d3f-ab15-413671561160-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-fx89g\" (UID: \"18ac83f9-098d-4d3f-ab15-413671561160\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.049690 4900 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18ac83f9-098d-4d3f-ab15-413671561160-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-79f9546874-fx89g\" (UID: \"18ac83f9-098d-4d3f-ab15-413671561160\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.113520 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.135441 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/186fb8e8-d830-4dc9-8b1a-a596b1348b39-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-w8mjx\" (UID: \"186fb8e8-d830-4dc9-8b1a-a596b1348b39\") " pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.135661 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxvm\" (UniqueName: \"kubernetes.io/projected/186fb8e8-d830-4dc9-8b1a-a596b1348b39-kube-api-access-ggxvm\") pod \"observability-operator-d8bb48f5d-w8mjx\" (UID: \"186fb8e8-d830-4dc9-8b1a-a596b1348b39\") " pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.229586 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.247298 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-cghkz"] Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.248861 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-cghkz" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.250611 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-fc4nn" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.259291 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/186fb8e8-d830-4dc9-8b1a-a596b1348b39-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-w8mjx\" (UID: \"186fb8e8-d830-4dc9-8b1a-a596b1348b39\") " pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.259400 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxvm\" (UniqueName: \"kubernetes.io/projected/186fb8e8-d830-4dc9-8b1a-a596b1348b39-kube-api-access-ggxvm\") pod \"observability-operator-d8bb48f5d-w8mjx\" (UID: \"186fb8e8-d830-4dc9-8b1a-a596b1348b39\") " pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.284521 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/186fb8e8-d830-4dc9-8b1a-a596b1348b39-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-w8mjx\" (UID: \"186fb8e8-d830-4dc9-8b1a-a596b1348b39\") " pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.303336 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggxvm\" (UniqueName: \"kubernetes.io/projected/186fb8e8-d830-4dc9-8b1a-a596b1348b39-kube-api-access-ggxvm\") pod \"observability-operator-d8bb48f5d-w8mjx\" (UID: \"186fb8e8-d830-4dc9-8b1a-a596b1348b39\") " pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.305425 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-cghkz"] Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.339763 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.360886 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e0d1a5d-af3d-4e3f-add9-e617a98cd95e-openshift-service-ca\") pod \"perses-operator-5446b9c989-cghkz\" (UID: \"7e0d1a5d-af3d-4e3f-add9-e617a98cd95e\") " pod="openshift-operators/perses-operator-5446b9c989-cghkz" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.360982 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm2zq\" (UniqueName: \"kubernetes.io/projected/7e0d1a5d-af3d-4e3f-add9-e617a98cd95e-kube-api-access-dm2zq\") pod \"perses-operator-5446b9c989-cghkz\" (UID: \"7e0d1a5d-af3d-4e3f-add9-e617a98cd95e\") " pod="openshift-operators/perses-operator-5446b9c989-cghkz" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.440844 4900 generic.go:334] "Generic (PLEG): container finished" podID="7aa847ae-bf9f-4727-aa6d-235721900502" containerID="683263b30304e60b8cca8b47740438b32d5e7a690d7dc7a24dc61cdf9cf006d5" exitCode=0 Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.440883 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dww6z" event={"ID":"7aa847ae-bf9f-4727-aa6d-235721900502","Type":"ContainerDied","Data":"683263b30304e60b8cca8b47740438b32d5e7a690d7dc7a24dc61cdf9cf006d5"} Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.466892 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e0d1a5d-af3d-4e3f-add9-e617a98cd95e-openshift-service-ca\") pod \"perses-operator-5446b9c989-cghkz\" (UID: \"7e0d1a5d-af3d-4e3f-add9-e617a98cd95e\") " pod="openshift-operators/perses-operator-5446b9c989-cghkz" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.467026 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm2zq\" (UniqueName: \"kubernetes.io/projected/7e0d1a5d-af3d-4e3f-add9-e617a98cd95e-kube-api-access-dm2zq\") pod \"perses-operator-5446b9c989-cghkz\" (UID: \"7e0d1a5d-af3d-4e3f-add9-e617a98cd95e\") " pod="openshift-operators/perses-operator-5446b9c989-cghkz" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.467908 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e0d1a5d-af3d-4e3f-add9-e617a98cd95e-openshift-service-ca\") pod \"perses-operator-5446b9c989-cghkz\" (UID: \"7e0d1a5d-af3d-4e3f-add9-e617a98cd95e\") " pod="openshift-operators/perses-operator-5446b9c989-cghkz" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.527095 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm2zq\" (UniqueName: \"kubernetes.io/projected/7e0d1a5d-af3d-4e3f-add9-e617a98cd95e-kube-api-access-dm2zq\") pod \"perses-operator-5446b9c989-cghkz\" (UID: \"7e0d1a5d-af3d-4e3f-add9-e617a98cd95e\") " pod="openshift-operators/perses-operator-5446b9c989-cghkz" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.625688 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-qv5tp"] Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.688342 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-cghkz" Dec 02 15:24:11 crc kubenswrapper[4900]: I1202 15:24:11.937215 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr"] Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.046567 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d6db-account-create-update-7w9q6"] Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.077009 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2k57m"] Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.122320 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d6db-account-create-update-7w9q6"] Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.157269 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2k57m"] Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.202432 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g"] Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.220845 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-w8mjx"] Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.234776 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-cghkz"] Dec 02 15:24:12 crc kubenswrapper[4900]: W1202 15:24:12.244755 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod186fb8e8_d830_4dc9_8b1a_a596b1348b39.slice/crio-e32c932ba3d9941a456f492f0397f2e58b12ef9c5847111cb9ac542808bc47e6 WatchSource:0}: Error finding container e32c932ba3d9941a456f492f0397f2e58b12ef9c5847111cb9ac542808bc47e6: Status 404 returned error can't find the container with id e32c932ba3d9941a456f492f0397f2e58b12ef9c5847111cb9ac542808bc47e6 Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.468772 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr" event={"ID":"e5fbb4b2-ff82-49df-8e11-16aa2e348fa0","Type":"ContainerStarted","Data":"308ac0b8d094bc15c3514422ef8e8f00ad448ccb744260cba8a77ddf920caab6"} Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.469820 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-cghkz" event={"ID":"7e0d1a5d-af3d-4e3f-add9-e617a98cd95e","Type":"ContainerStarted","Data":"68017e0cb314d7e50b2d3261fea05af5caf132b9185fa3cac7b0419b24abaa0f"} Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.470735 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" event={"ID":"186fb8e8-d830-4dc9-8b1a-a596b1348b39","Type":"ContainerStarted","Data":"e32c932ba3d9941a456f492f0397f2e58b12ef9c5847111cb9ac542808bc47e6"} Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.471624 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qv5tp" event={"ID":"4c8af68c-3b2d-44ce-86e7-5d94a1038d5f","Type":"ContainerStarted","Data":"adf09f07ff85703b5e4f377ce80c776a96876c002919d0618191659420886e81"} Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.476777 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g" event={"ID":"18ac83f9-098d-4d3f-ab15-413671561160","Type":"ContainerStarted","Data":"fe410d24d342fcdf1011a626a45bd84d65814c86779fc723d7bb56c569092273"} Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.921767 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="717f6540-b7e2-4a49-9de1-72291cf6e532" path="/var/lib/kubelet/pods/717f6540-b7e2-4a49-9de1-72291cf6e532/volumes" Dec 02 15:24:12 crc kubenswrapper[4900]: I1202 15:24:12.922970 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd2e113-06ec-473e-b9d3-d064bdcf430e" path="/var/lib/kubelet/pods/ddd2e113-06ec-473e-b9d3-d064bdcf430e/volumes" Dec 02 15:24:14 crc kubenswrapper[4900]: I1202 15:24:14.512159 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dww6z" event={"ID":"7aa847ae-bf9f-4727-aa6d-235721900502","Type":"ContainerStarted","Data":"3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee"} Dec 02 15:24:14 crc kubenswrapper[4900]: I1202 15:24:14.541808 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dww6z" podStartSLOduration=3.80766156 podStartE2EDuration="18.541788757s" podCreationTimestamp="2025-12-02 15:23:56 +0000 UTC" firstStartedPulling="2025-12-02 15:23:58.203015599 +0000 UTC m=+6083.618829460" lastFinishedPulling="2025-12-02 15:24:12.937142806 +0000 UTC m=+6098.352956657" observedRunningTime="2025-12-02 15:24:14.53654955 +0000 UTC m=+6099.952363401" watchObservedRunningTime="2025-12-02 15:24:14.541788757 +0000 UTC m=+6099.957602608" Dec 02 15:24:16 crc kubenswrapper[4900]: I1202 15:24:16.798618 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dww6z" Dec 02 15:24:16 crc kubenswrapper[4900]: I1202 15:24:16.799130 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dww6z" Dec 02 15:24:17 crc kubenswrapper[4900]: I1202 15:24:17.848704 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dww6z" podUID="7aa847ae-bf9f-4727-aa6d-235721900502" containerName="registry-server" probeResult="failure" output=< Dec 02 15:24:17 crc kubenswrapper[4900]: timeout: failed to connect service ":50051" within 1s Dec 02 15:24:17 crc kubenswrapper[4900]: > Dec 02 15:24:20 crc kubenswrapper[4900]: I1202 15:24:20.047398 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-lmjzm"] Dec 02 15:24:20 crc kubenswrapper[4900]: I1202 15:24:20.056384 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-lmjzm"] Dec 02 15:24:20 crc kubenswrapper[4900]: I1202 15:24:20.923747 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2828f33-f08a-4ef5-8995-f2a5b72de227" path="/var/lib/kubelet/pods/b2828f33-f08a-4ef5-8995-f2a5b72de227/volumes" Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.612418 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" event={"ID":"186fb8e8-d830-4dc9-8b1a-a596b1348b39","Type":"ContainerStarted","Data":"3ad2795ce7c9650ea396dcf55e98582a0182733ad1b1a692f0e4ced6a8e1fde4"} Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.612913 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.615144 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qv5tp" event={"ID":"4c8af68c-3b2d-44ce-86e7-5d94a1038d5f","Type":"ContainerStarted","Data":"128dbe1fa249985a7372dd06324ed835897231e70f37fcaf4a0febd5f23c0475"} Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.617159 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g" event={"ID":"18ac83f9-098d-4d3f-ab15-413671561160","Type":"ContainerStarted","Data":"bf4726ccee669611f9a676ef34a40696c9ba64d9bbd2d06488ff1d191e9d1e4e"} Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.619356 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr" event={"ID":"e5fbb4b2-ff82-49df-8e11-16aa2e348fa0","Type":"ContainerStarted","Data":"c4dca1c8bb87c4d8da5578e53a610892cb4c110e11c516320b74150f29335b85"} Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.621275 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-cghkz" event={"ID":"7e0d1a5d-af3d-4e3f-add9-e617a98cd95e","Type":"ContainerStarted","Data":"425573fb3a6e256d831efe2b9de2990110d400015b0b9f152d2efee17c34df63"} Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.622100 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-cghkz" Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.642089 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" podStartSLOduration=3.111207314 podStartE2EDuration="12.642068587s" podCreationTimestamp="2025-12-02 15:24:10 +0000 UTC" firstStartedPulling="2025-12-02 15:24:12.247802805 +0000 UTC m=+6097.663616656" lastFinishedPulling="2025-12-02 15:24:21.778664078 +0000 UTC m=+6107.194477929" observedRunningTime="2025-12-02 15:24:22.629255506 +0000 UTC m=+6108.045069357" watchObservedRunningTime="2025-12-02 15:24:22.642068587 +0000 UTC m=+6108.057882428" Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.656616 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qv5tp" podStartSLOduration=2.582298556 podStartE2EDuration="12.656598496s" podCreationTimestamp="2025-12-02 15:24:10 +0000 UTC" firstStartedPulling="2025-12-02 15:24:11.647954561 +0000 UTC m=+6097.063768412" lastFinishedPulling="2025-12-02 15:24:21.722254491 +0000 UTC m=+6107.138068352" observedRunningTime="2025-12-02 15:24:22.646285056 +0000 UTC m=+6108.062098907" watchObservedRunningTime="2025-12-02 15:24:22.656598496 +0000 UTC m=+6108.072412347" Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.662530 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-w8mjx" Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.673066 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-bcmqr" podStartSLOduration=2.913050009 podStartE2EDuration="12.673050448s" podCreationTimestamp="2025-12-02 15:24:10 +0000 UTC" firstStartedPulling="2025-12-02 15:24:11.931979161 +0000 UTC m=+6097.347793002" 
lastFinishedPulling="2025-12-02 15:24:21.6919796 +0000 UTC m=+6107.107793441" observedRunningTime="2025-12-02 15:24:22.671347111 +0000 UTC m=+6108.087160962" watchObservedRunningTime="2025-12-02 15:24:22.673050448 +0000 UTC m=+6108.088864299" Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.723391 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-79f9546874-fx89g" podStartSLOduration=3.221111776 podStartE2EDuration="12.723366644s" podCreationTimestamp="2025-12-02 15:24:10 +0000 UTC" firstStartedPulling="2025-12-02 15:24:12.219361125 +0000 UTC m=+6097.635174976" lastFinishedPulling="2025-12-02 15:24:21.721615983 +0000 UTC m=+6107.137429844" observedRunningTime="2025-12-02 15:24:22.701808057 +0000 UTC m=+6108.117621908" watchObservedRunningTime="2025-12-02 15:24:22.723366644 +0000 UTC m=+6108.139180505" Dec 02 15:24:22 crc kubenswrapper[4900]: I1202 15:24:22.751661 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-cghkz" podStartSLOduration=2.311636822 podStartE2EDuration="11.751621049s" podCreationTimestamp="2025-12-02 15:24:11 +0000 UTC" firstStartedPulling="2025-12-02 15:24:12.263558258 +0000 UTC m=+6097.679372109" lastFinishedPulling="2025-12-02 15:24:21.703542465 +0000 UTC m=+6107.119356336" observedRunningTime="2025-12-02 15:24:22.743170921 +0000 UTC m=+6108.158984782" watchObservedRunningTime="2025-12-02 15:24:22.751621049 +0000 UTC m=+6108.167434900" Dec 02 15:24:26 crc kubenswrapper[4900]: I1202 15:24:26.853589 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dww6z" Dec 02 15:24:26 crc kubenswrapper[4900]: I1202 15:24:26.907046 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dww6z" Dec 02 15:24:27 crc kubenswrapper[4900]: I1202 15:24:27.286818 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dww6z"] Dec 02 15:24:27 crc kubenswrapper[4900]: I1202 15:24:27.462418 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6hx4"] Dec 02 15:24:27 crc kubenswrapper[4900]: I1202 15:24:27.469989 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x6hx4" podUID="5f06d1fb-7275-44f3-867d-4178c35c0952" containerName="registry-server" containerID="cri-o://e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8" gracePeriod=2 Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.449615 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.465331 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-utilities\") pod \"5f06d1fb-7275-44f3-867d-4178c35c0952\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.465494 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csbd8\" (UniqueName: \"kubernetes.io/projected/5f06d1fb-7275-44f3-867d-4178c35c0952-kube-api-access-csbd8\") pod \"5f06d1fb-7275-44f3-867d-4178c35c0952\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.465562 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-catalog-content\") pod \"5f06d1fb-7275-44f3-867d-4178c35c0952\" (UID: \"5f06d1fb-7275-44f3-867d-4178c35c0952\") " Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.467555 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-utilities" (OuterVolumeSpecName: "utilities") pod "5f06d1fb-7275-44f3-867d-4178c35c0952" (UID: "5f06d1fb-7275-44f3-867d-4178c35c0952"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.473540 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f06d1fb-7275-44f3-867d-4178c35c0952-kube-api-access-csbd8" (OuterVolumeSpecName: "kube-api-access-csbd8") pod "5f06d1fb-7275-44f3-867d-4178c35c0952" (UID: "5f06d1fb-7275-44f3-867d-4178c35c0952"). InnerVolumeSpecName "kube-api-access-csbd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.567924 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csbd8\" (UniqueName: \"kubernetes.io/projected/5f06d1fb-7275-44f3-867d-4178c35c0952-kube-api-access-csbd8\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.567967 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.601693 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f06d1fb-7275-44f3-867d-4178c35c0952" (UID: "5f06d1fb-7275-44f3-867d-4178c35c0952"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.669876 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f06d1fb-7275-44f3-867d-4178c35c0952-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.685258 4900 generic.go:334] "Generic (PLEG): container finished" podID="5f06d1fb-7275-44f3-867d-4178c35c0952" containerID="e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8" exitCode=0 Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.685349 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6hx4" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.685341 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6hx4" event={"ID":"5f06d1fb-7275-44f3-867d-4178c35c0952","Type":"ContainerDied","Data":"e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8"} Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.685415 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6hx4" event={"ID":"5f06d1fb-7275-44f3-867d-4178c35c0952","Type":"ContainerDied","Data":"0cb2ff769360cbce4bf4e1ff6fc6913ac3c42c4610710255909aeb71db979055"} Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.685443 4900 scope.go:117] "RemoveContainer" containerID="e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.712890 4900 scope.go:117] "RemoveContainer" containerID="5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.725356 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6hx4"] Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.735389 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x6hx4"] Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.749316 4900 scope.go:117] "RemoveContainer" containerID="ee724de7a0bec1db42c0de642d12e2deff05d4ddb431d4e4ec3e76b8747369bd" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.790901 4900 scope.go:117] "RemoveContainer" containerID="e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8" Dec 02 15:24:28 crc kubenswrapper[4900]: E1202 15:24:28.791425 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8\": container with ID starting with e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8 not found: ID does not exist" containerID="e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8" Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.791473 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8"} err="failed to get container status \"e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8\": rpc error: code = NotFound desc = could not find container \"e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8\": container with ID starting with e573c287c5bbb984306d2d15f85a5300081a0cec6c23a6dad494acabf74d85e8 not found: ID does not exist" Dec 02 15:24:28 crc 
Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.791501 4900 scope.go:117] "RemoveContainer" containerID="5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3"
Dec 02 15:24:28 crc kubenswrapper[4900]: E1202 15:24:28.791875 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3\": container with ID starting with 5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3 not found: ID does not exist" containerID="5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3"
Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.791896 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3"} err="failed to get container status \"5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3\": rpc error: code = NotFound desc = could not find container \"5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3\": container with ID starting with 5ae78ce1d13530b8bbb1217fa7088ca811f28df26f57e6481fb1bd1061b838d3 not found: ID does not exist"
Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.791909 4900 scope.go:117] "RemoveContainer" containerID="ee724de7a0bec1db42c0de642d12e2deff05d4ddb431d4e4ec3e76b8747369bd"
Dec 02 15:24:28 crc kubenswrapper[4900]: E1202 15:24:28.792182 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee724de7a0bec1db42c0de642d12e2deff05d4ddb431d4e4ec3e76b8747369bd\": container with ID starting with ee724de7a0bec1db42c0de642d12e2deff05d4ddb431d4e4ec3e76b8747369bd not found: ID does not exist" containerID="ee724de7a0bec1db42c0de642d12e2deff05d4ddb431d4e4ec3e76b8747369bd"
Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.792212 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee724de7a0bec1db42c0de642d12e2deff05d4ddb431d4e4ec3e76b8747369bd"} err="failed to get container status \"ee724de7a0bec1db42c0de642d12e2deff05d4ddb431d4e4ec3e76b8747369bd\": rpc error: code = NotFound desc = could not find container \"ee724de7a0bec1db42c0de642d12e2deff05d4ddb431d4e4ec3e76b8747369bd\": container with ID starting with ee724de7a0bec1db42c0de642d12e2deff05d4ddb431d4e4ec3e76b8747369bd not found: ID does not exist"
Dec 02 15:24:28 crc kubenswrapper[4900]: I1202 15:24:28.924741 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f06d1fb-7275-44f3-867d-4178c35c0952" path="/var/lib/kubelet/pods/5f06d1fb-7275-44f3-867d-4178c35c0952/volumes"
Dec 02 15:24:31 crc kubenswrapper[4900]: I1202 15:24:31.692032 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-cghkz"
Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.116058 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.116863 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="beee5dba-052e-4430-9e85-79b478eabf6d" containerName="openstackclient" containerID="cri-o://3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a" gracePeriod=2
Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.129570 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec
02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.150068 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 02 15:24:36 crc kubenswrapper[4900]: E1202 15:24:36.150562 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f06d1fb-7275-44f3-867d-4178c35c0952" containerName="registry-server" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.150587 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f06d1fb-7275-44f3-867d-4178c35c0952" containerName="registry-server" Dec 02 15:24:36 crc kubenswrapper[4900]: E1202 15:24:36.150613 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f06d1fb-7275-44f3-867d-4178c35c0952" containerName="extract-utilities" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.150621 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f06d1fb-7275-44f3-867d-4178c35c0952" containerName="extract-utilities" Dec 02 15:24:36 crc kubenswrapper[4900]: E1202 15:24:36.150682 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f06d1fb-7275-44f3-867d-4178c35c0952" containerName="extract-content" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.150690 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f06d1fb-7275-44f3-867d-4178c35c0952" containerName="extract-content" Dec 02 15:24:36 crc kubenswrapper[4900]: E1202 15:24:36.150710 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beee5dba-052e-4430-9e85-79b478eabf6d" containerName="openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.150719 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="beee5dba-052e-4430-9e85-79b478eabf6d" containerName="openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.151000 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f06d1fb-7275-44f3-867d-4178c35c0952" containerName="registry-server" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.151033 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="beee5dba-052e-4430-9e85-79b478eabf6d" containerName="openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.151811 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.157080 4900 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="beee5dba-052e-4430-9e85-79b478eabf6d" podUID="05b43015-c93a-4b6b-9997-49dc52a8d84c" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.166546 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.253039 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/05b43015-c93a-4b6b-9997-49dc52a8d84c-openstack-config\") pod \"openstackclient\" (UID: \"05b43015-c93a-4b6b-9997-49dc52a8d84c\") " pod="openstack/openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.253174 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqv8\" (UniqueName: \"kubernetes.io/projected/05b43015-c93a-4b6b-9997-49dc52a8d84c-kube-api-access-ksqv8\") pod \"openstackclient\" (UID: \"05b43015-c93a-4b6b-9997-49dc52a8d84c\") " pod="openstack/openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.253346 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/05b43015-c93a-4b6b-9997-49dc52a8d84c-openstack-config-secret\") pod \"openstackclient\" (UID: \"05b43015-c93a-4b6b-9997-49dc52a8d84c\") " pod="openstack/openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.325493 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.326903 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.334366 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tsg7b" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.356559 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/05b43015-c93a-4b6b-9997-49dc52a8d84c-openstack-config-secret\") pod \"openstackclient\" (UID: \"05b43015-c93a-4b6b-9997-49dc52a8d84c\") " pod="openstack/openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.357756 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/05b43015-c93a-4b6b-9997-49dc52a8d84c-openstack-config\") pod \"openstackclient\" (UID: \"05b43015-c93a-4b6b-9997-49dc52a8d84c\") " pod="openstack/openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.363393 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.364112 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksqv8\" (UniqueName: \"kubernetes.io/projected/05b43015-c93a-4b6b-9997-49dc52a8d84c-kube-api-access-ksqv8\") pod \"openstackclient\" (UID: \"05b43015-c93a-4b6b-9997-49dc52a8d84c\") " pod="openstack/openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.370620 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/05b43015-c93a-4b6b-9997-49dc52a8d84c-openstack-config-secret\") pod \"openstackclient\" (UID: \"05b43015-c93a-4b6b-9997-49dc52a8d84c\") " pod="openstack/openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.372844 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/05b43015-c93a-4b6b-9997-49dc52a8d84c-openstack-config\") pod \"openstackclient\" (UID: \"05b43015-c93a-4b6b-9997-49dc52a8d84c\") " pod="openstack/openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.421983 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksqv8\" (UniqueName: \"kubernetes.io/projected/05b43015-c93a-4b6b-9997-49dc52a8d84c-kube-api-access-ksqv8\") pod \"openstackclient\" (UID: \"05b43015-c93a-4b6b-9997-49dc52a8d84c\") " pod="openstack/openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.474445 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.477632 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfcnw\" (UniqueName: \"kubernetes.io/projected/f608b559-0bd1-439d-b404-022ecd8de49f-kube-api-access-lfcnw\") pod \"kube-state-metrics-0\" (UID: \"f608b559-0bd1-439d-b404-022ecd8de49f\") " pod="openstack/kube-state-metrics-0" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.579896 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfcnw\" (UniqueName: \"kubernetes.io/projected/f608b559-0bd1-439d-b404-022ecd8de49f-kube-api-access-lfcnw\") pod \"kube-state-metrics-0\" (UID: \"f608b559-0bd1-439d-b404-022ecd8de49f\") " pod="openstack/kube-state-metrics-0" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.622460 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfcnw\" (UniqueName: \"kubernetes.io/projected/f608b559-0bd1-439d-b404-022ecd8de49f-kube-api-access-lfcnw\") pod \"kube-state-metrics-0\" (UID: \"f608b559-0bd1-439d-b404-022ecd8de49f\") " pod="openstack/kube-state-metrics-0" Dec 02 15:24:36 crc kubenswrapper[4900]: I1202 15:24:36.668286 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.067434 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.070254 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.088998 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.089235 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.089269 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-d7shj" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.089369 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.090893 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.104905 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.212437 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c939e201-5539-4e52-a39a-758328ae3f19-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.212505 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c939e201-5539-4e52-a39a-758328ae3f19-web-config\") pod \"alertmanager-metric-storage-0\" 
(UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.212586 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/c939e201-5539-4e52-a39a-758328ae3f19-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.236920 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/c939e201-5539-4e52-a39a-758328ae3f19-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.237057 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c939e201-5539-4e52-a39a-758328ae3f19-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.237250 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j9xn\" (UniqueName: \"kubernetes.io/projected/c939e201-5539-4e52-a39a-758328ae3f19-kube-api-access-4j9xn\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.237425 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c939e201-5539-4e52-a39a-758328ae3f19-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.338942 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/c939e201-5539-4e52-a39a-758328ae3f19-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.339009 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/c939e201-5539-4e52-a39a-758328ae3f19-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.339050 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c939e201-5539-4e52-a39a-758328ae3f19-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.339121 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4j9xn\" (UniqueName: \"kubernetes.io/projected/c939e201-5539-4e52-a39a-758328ae3f19-kube-api-access-4j9xn\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.339180 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c939e201-5539-4e52-a39a-758328ae3f19-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.339206 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c939e201-5539-4e52-a39a-758328ae3f19-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.339227 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c939e201-5539-4e52-a39a-758328ae3f19-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.346746 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c939e201-5539-4e52-a39a-758328ae3f19-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.349938 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/c939e201-5539-4e52-a39a-758328ae3f19-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.349665 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c939e201-5539-4e52-a39a-758328ae3f19-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.352726 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c939e201-5539-4e52-a39a-758328ae3f19-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.355374 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c939e201-5539-4e52-a39a-758328ae3f19-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.355495 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: 
\"kubernetes.io/secret/c939e201-5539-4e52-a39a-758328ae3f19-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.385751 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j9xn\" (UniqueName: \"kubernetes.io/projected/c939e201-5539-4e52-a39a-758328ae3f19-kube-api-access-4j9xn\") pod \"alertmanager-metric-storage-0\" (UID: \"c939e201-5539-4e52-a39a-758328ae3f19\") " pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.414498 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.435053 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.679915 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.695369 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.698288 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.701057 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.701118 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.701303 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.701443 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-wdxt5" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.701545 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.702893 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.704605 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.861343 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.861693 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " 
pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.861737 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.861763 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.861777 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndr4k\" (UniqueName: \"kubernetes.io/projected/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-kube-api-access-ndr4k\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.861820 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.861859 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-config\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.861928 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c0455546-2d3e-4a39-baf2-115b87aa06a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c0455546-2d3e-4a39-baf2-115b87aa06a6\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.937942 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f608b559-0bd1-439d-b404-022ecd8de49f","Type":"ContainerStarted","Data":"13c69435b4b36839ad41f929b7db5e92f150e4ded102c3e11d5ec09e954d977f"} Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.941722 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"05b43015-c93a-4b6b-9997-49dc52a8d84c","Type":"ContainerStarted","Data":"44cac9e2e8c9efd0a44d36c7bfd607fbb396da39aaabdaab0225fe0126a0c738"} Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.941774 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"05b43015-c93a-4b6b-9997-49dc52a8d84c","Type":"ContainerStarted","Data":"9d8f43da4eedc8c0d9cd1b90dee68c0912ebdebf26b3c4beb1d89eb61202bbac"} Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.963380 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.963454 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.963480 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndr4k\" (UniqueName: \"kubernetes.io/projected/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-kube-api-access-ndr4k\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.963524 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.964763 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.964851 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-config\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.965017 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c0455546-2d3e-4a39-baf2-115b87aa06a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c0455546-2d3e-4a39-baf2-115b87aa06a6\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.965336 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.965409 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc 
kubenswrapper[4900]: I1202 15:24:37.966048 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.966034396 podStartE2EDuration="1.966034396s" podCreationTimestamp="2025-12-02 15:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:24:37.95695711 +0000 UTC m=+6123.372770971" watchObservedRunningTime="2025-12-02 15:24:37.966034396 +0000 UTC m=+6123.381848247" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.971993 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.983128 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.989256 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndr4k\" (UniqueName: \"kubernetes.io/projected/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-kube-api-access-ndr4k\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.991320 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.991458 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.993484 4900 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
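
The entries above walk each volume of alertmanager-metric-storage-0 and prometheus-metric-storage-0 through the kubelet's usual three-step mount lifecycle: reconciler_common.go logs "VerifyControllerAttachedVolume started", then "MountVolume started", and operation_generator.go finally logs "MountVolume.SetUp succeeded". The csi_attacher.go entry immediately above is expected for the kubevirt.io.hostpath-provisioner PVC: as the message itself says, the driver does not advertise the STAGE_UNSTAGE_VOLUME capability, so the kubelet skips the separate MountDevice staging step and goes straight to SetUp. As a minimal sketch (not from this system), the Go program below tallies "MountVolume.SetUp succeeded" entries per pod from a journal dump, so the completed mounts can be cross-checked against the volumes the reconciler started; the regexp is an assumption keyed to the klog message format visible here, and it expects one journal entry per line the way journalctl emits them, so the wrapped lines in this capture would need rejoining first.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches entries like:
	//   ... "MountVolume.SetUp succeeded for volume \"config-out\" ..." pod="openstack/prometheus-metric-storage-0"
	// The inner quotes are backslash-escaped in the raw journal because klog
	// quotes the whole structured message; the trailing pod="..." field is not.
	re := regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^\\"]+)\\".*?pod="([^"]+)"`)

	mounted := map[string][]string{} // pod -> volume names seen
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // kubelet journal lines can be long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			mounted[m[2]] = append(mounted[m[2]], m[1])
		}
	}
	for pod, vols := range mounted {
		fmt.Printf("%s: %d volume(s): %v\n", pod, len(vols), vols)
	}
}

Fed a stream such as `journalctl -u kubelet --no-pager` covering the window above, it would report, for example, seven volumes for openstack/alertmanager-metric-storage-0 (config-out, alertmanager-metric-storage-db, tls-assets, web-config, config-volume, cluster-tls-config, kube-api-access-4j9xn), matching the seven VerifyControllerAttachedVolume entries for that pod.
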
Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.993521 4900 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c0455546-2d3e-4a39-baf2-115b87aa06a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c0455546-2d3e-4a39-baf2-115b87aa06a6\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fc6fa47118ad32f0f148edf4668bf06f747caad731e79b5d1d8e7b76d30037c0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:37 crc kubenswrapper[4900]: I1202 15:24:37.996289 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/35b0b67a-fdd7-47a5-8ee1-4a179bffaa84-config\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.101537 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c0455546-2d3e-4a39-baf2-115b87aa06a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c0455546-2d3e-4a39-baf2-115b87aa06a6\") pod \"prometheus-metric-storage-0\" (UID: \"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84\") " pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.230364 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.357375 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.561474 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.691233 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config\") pod \"beee5dba-052e-4430-9e85-79b478eabf6d\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.691346 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config-secret\") pod \"beee5dba-052e-4430-9e85-79b478eabf6d\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.691545 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnssc\" (UniqueName: \"kubernetes.io/projected/beee5dba-052e-4430-9e85-79b478eabf6d-kube-api-access-rnssc\") pod \"beee5dba-052e-4430-9e85-79b478eabf6d\" (UID: \"beee5dba-052e-4430-9e85-79b478eabf6d\") " Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.701800 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beee5dba-052e-4430-9e85-79b478eabf6d-kube-api-access-rnssc" (OuterVolumeSpecName: "kube-api-access-rnssc") pod "beee5dba-052e-4430-9e85-79b478eabf6d" (UID: "beee5dba-052e-4430-9e85-79b478eabf6d"). InnerVolumeSpecName "kube-api-access-rnssc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.736603 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "beee5dba-052e-4430-9e85-79b478eabf6d" (UID: "beee5dba-052e-4430-9e85-79b478eabf6d"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.795267 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnssc\" (UniqueName: \"kubernetes.io/projected/beee5dba-052e-4430-9e85-79b478eabf6d-kube-api-access-rnssc\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.795326 4900 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.811819 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "beee5dba-052e-4430-9e85-79b478eabf6d" (UID: "beee5dba-052e-4430-9e85-79b478eabf6d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.897703 4900 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/beee5dba-052e-4430-9e85-79b478eabf6d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.921379 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beee5dba-052e-4430-9e85-79b478eabf6d" path="/var/lib/kubelet/pods/beee5dba-052e-4430-9e85-79b478eabf6d/volumes" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.951407 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f608b559-0bd1-439d-b404-022ecd8de49f","Type":"ContainerStarted","Data":"52b15a7f705913a3749de97f380af0285c5f13274b99d3a69613d2c773d378d0"} Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.951528 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.952670 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"c939e201-5539-4e52-a39a-758328ae3f19","Type":"ContainerStarted","Data":"295c49db2f8c22f582902745af51630381d99c2378a0276ee797309b7abfbcc7"} Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.954511 4900 generic.go:334] "Generic (PLEG): container finished" podID="beee5dba-052e-4430-9e85-79b478eabf6d" containerID="3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a" exitCode=137 Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.954557 4900 scope.go:117] "RemoveContainer" containerID="3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.954565 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.978329 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.559971994 podStartE2EDuration="2.978310082s" podCreationTimestamp="2025-12-02 15:24:36 +0000 UTC" firstStartedPulling="2025-12-02 15:24:37.68073568 +0000 UTC m=+6123.096549531" lastFinishedPulling="2025-12-02 15:24:38.099073768 +0000 UTC m=+6123.514887619" observedRunningTime="2025-12-02 15:24:38.967027735 +0000 UTC m=+6124.382841576" watchObservedRunningTime="2025-12-02 15:24:38.978310082 +0000 UTC m=+6124.394123923" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.987940 4900 scope.go:117] "RemoveContainer" containerID="3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a" Dec 02 15:24:38 crc kubenswrapper[4900]: E1202 15:24:38.990851 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a\": container with ID starting with 3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a not found: ID does not exist" containerID="3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a" Dec 02 15:24:38 crc kubenswrapper[4900]: I1202 15:24:38.990901 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a"} err="failed to get container status \"3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a\": rpc error: code = NotFound desc = could not find container \"3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a\": container with ID starting with 3eab10f70ab2f88a5634b1f0937825cc8feb3d927ce5893c474163364e1b196a not found: ID does not exist" Dec 02 15:24:39 crc kubenswrapper[4900]: I1202 15:24:39.033866 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 02 15:24:39 crc kubenswrapper[4900]: I1202 15:24:39.967409 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84","Type":"ContainerStarted","Data":"52b60080c3e6bf41f69b59182276971f0745ec41f0c6d6ff0900e065553c0220"} Dec 02 15:24:45 crc kubenswrapper[4900]: I1202 15:24:45.020245 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"c939e201-5539-4e52-a39a-758328ae3f19","Type":"ContainerStarted","Data":"47bc01a11c90975165f578bb5d6d632914e96247f6c252dae01e8ad721261f2f"} Dec 02 15:24:45 crc kubenswrapper[4900]: I1202 15:24:45.023263 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84","Type":"ContainerStarted","Data":"759c568dc54e460c519cae8517fe24f3b65f7adcdcd3cc31a4077414a231c184"} Dec 02 15:24:46 crc kubenswrapper[4900]: I1202 15:24:46.674148 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 02 15:24:51 crc kubenswrapper[4900]: I1202 15:24:51.042539 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85a6-account-create-update-h527x"] Dec 02 15:24:51 crc kubenswrapper[4900]: I1202 15:24:51.056822 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-c95jb"] Dec 02 15:24:51 crc 
kubenswrapper[4900]: I1202 15:24:51.069513 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-c95jb"] Dec 02 15:24:51 crc kubenswrapper[4900]: I1202 15:24:51.079502 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85a6-account-create-update-h527x"] Dec 02 15:24:51 crc kubenswrapper[4900]: I1202 15:24:51.097379 4900 generic.go:334] "Generic (PLEG): container finished" podID="35b0b67a-fdd7-47a5-8ee1-4a179bffaa84" containerID="759c568dc54e460c519cae8517fe24f3b65f7adcdcd3cc31a4077414a231c184" exitCode=0 Dec 02 15:24:51 crc kubenswrapper[4900]: I1202 15:24:51.097422 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84","Type":"ContainerDied","Data":"759c568dc54e460c519cae8517fe24f3b65f7adcdcd3cc31a4077414a231c184"} Dec 02 15:24:52 crc kubenswrapper[4900]: I1202 15:24:52.107588 4900 generic.go:334] "Generic (PLEG): container finished" podID="c939e201-5539-4e52-a39a-758328ae3f19" containerID="47bc01a11c90975165f578bb5d6d632914e96247f6c252dae01e8ad721261f2f" exitCode=0 Dec 02 15:24:52 crc kubenswrapper[4900]: I1202 15:24:52.108042 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"c939e201-5539-4e52-a39a-758328ae3f19","Type":"ContainerDied","Data":"47bc01a11c90975165f578bb5d6d632914e96247f6c252dae01e8ad721261f2f"} Dec 02 15:24:52 crc kubenswrapper[4900]: I1202 15:24:52.925376 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65818e2d-d982-4fe9-b4c7-8a4e77f7ee42" path="/var/lib/kubelet/pods/65818e2d-d982-4fe9-b4c7-8a4e77f7ee42/volumes" Dec 02 15:24:52 crc kubenswrapper[4900]: I1202 15:24:52.925968 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="737c1bfd-4cb5-41a8-aa81-fce3a6f84f12" path="/var/lib/kubelet/pods/737c1bfd-4cb5-41a8-aa81-fce3a6f84f12/volumes" Dec 02 15:24:56 crc kubenswrapper[4900]: I1202 15:24:56.033568 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-6h5mj"] Dec 02 15:24:56 crc kubenswrapper[4900]: I1202 15:24:56.043510 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-6h5mj"] Dec 02 15:24:56 crc kubenswrapper[4900]: I1202 15:24:56.922441 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7754303c-9e8d-4345-bc03-92010f30c25b" path="/var/lib/kubelet/pods/7754303c-9e8d-4345-bc03-92010f30c25b/volumes" Dec 02 15:25:00 crc kubenswrapper[4900]: I1202 15:25:00.219139 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"c939e201-5539-4e52-a39a-758328ae3f19","Type":"ContainerStarted","Data":"8df96cd616c3651409b928b82a102e8bc753776cd60f0db49e4cd95f720fdf9f"} Dec 02 15:25:00 crc kubenswrapper[4900]: I1202 15:25:00.223959 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84","Type":"ContainerStarted","Data":"63dd3452cdcaaa770e60c281f197bf7cb990a5b66a382f1c8e1a1bfeff14be8e"} Dec 02 15:25:04 crc kubenswrapper[4900]: I1202 15:25:04.273799 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"c939e201-5539-4e52-a39a-758328ae3f19","Type":"ContainerStarted","Data":"ddddad29302ccd998de9c905d73ace028355c45c8c9808e9d8cb189b0b4c3ff6"} Dec 02 15:25:04 crc kubenswrapper[4900]: I1202 15:25:04.274453 4900 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Dec 02 15:25:04 crc kubenswrapper[4900]: I1202 15:25:04.278283 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Dec 02 15:25:04 crc kubenswrapper[4900]: I1202 15:25:04.280353 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84","Type":"ContainerStarted","Data":"a7de42df1ff8717428631d8f207e808246cb525acf9856ae6df2a9885cea434d"} Dec 02 15:25:04 crc kubenswrapper[4900]: I1202 15:25:04.307792 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.359342185 podStartE2EDuration="27.307770937s" podCreationTimestamp="2025-12-02 15:24:37 +0000 UTC" firstStartedPulling="2025-12-02 15:24:38.28327969 +0000 UTC m=+6123.699093541" lastFinishedPulling="2025-12-02 15:24:59.231708442 +0000 UTC m=+6144.647522293" observedRunningTime="2025-12-02 15:25:04.297173389 +0000 UTC m=+6149.712987280" watchObservedRunningTime="2025-12-02 15:25:04.307770937 +0000 UTC m=+6149.723584788" Dec 02 15:25:08 crc kubenswrapper[4900]: I1202 15:25:08.323248 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"35b0b67a-fdd7-47a5-8ee1-4a179bffaa84","Type":"ContainerStarted","Data":"e1135510d663a7d23a7b1b5958219fa751217af0ab27d1c2b7a19c5fc08456ab"} Dec 02 15:25:08 crc kubenswrapper[4900]: I1202 15:25:08.357553 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 02 15:25:08 crc kubenswrapper[4900]: I1202 15:25:08.357589 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 02 15:25:08 crc kubenswrapper[4900]: I1202 15:25:08.361438 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 02 15:25:08 crc kubenswrapper[4900]: I1202 15:25:08.364143 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.956612063 podStartE2EDuration="32.364124437s" podCreationTimestamp="2025-12-02 15:24:36 +0000 UTC" firstStartedPulling="2025-12-02 15:24:39.047529109 +0000 UTC m=+6124.463342960" lastFinishedPulling="2025-12-02 15:25:07.455041483 +0000 UTC m=+6152.870855334" observedRunningTime="2025-12-02 15:25:08.349229048 +0000 UTC m=+6153.765042899" watchObservedRunningTime="2025-12-02 15:25:08.364124437 +0000 UTC m=+6153.779938288" Dec 02 15:25:09 crc kubenswrapper[4900]: I1202 15:25:09.335298 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 02 15:25:10 crc kubenswrapper[4900]: I1202 15:25:10.370533 4900 scope.go:117] "RemoveContainer" containerID="94827944a2d4e0fc86dcf15b4764fbaaeff20a8cb0f99fe4c7a197840a65415c" Dec 02 15:25:10 crc kubenswrapper[4900]: I1202 15:25:10.436293 4900 scope.go:117] "RemoveContainer" containerID="45bef62447ad90fa0db77a7ca66581344708e0aaac6c07b9901e935c50ec3c67" Dec 02 15:25:10 crc kubenswrapper[4900]: I1202 15:25:10.468978 4900 scope.go:117] "RemoveContainer" containerID="7a5af63b513fa6dd091b7e151bfe5c79adeb61d4e3cec885e729cd187fd5a197" Dec 02 15:25:10 crc kubenswrapper[4900]: I1202 15:25:10.529325 4900 scope.go:117] "RemoveContainer" 
containerID="a73fa48de9dcb5e9d9560fa3680ecc43ef00e9b40f21297638e08d06c32d6511" Dec 02 15:25:10 crc kubenswrapper[4900]: I1202 15:25:10.565299 4900 scope.go:117] "RemoveContainer" containerID="9e1bec05f01bc4938aee8f24c9b2dd2a4204fb250992230c6aefd1ee7c25b194" Dec 02 15:25:10 crc kubenswrapper[4900]: I1202 15:25:10.621139 4900 scope.go:117] "RemoveContainer" containerID="a49c768e61802a505f088beb662353870f987dd9c09ace4e018efe29cb99917d" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.259017 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.266579 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.269169 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.269362 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.277827 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.431565 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.431672 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cff7q\" (UniqueName: \"kubernetes.io/projected/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-kube-api-access-cff7q\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.431704 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.431724 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-config-data\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.431747 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-scripts\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.431797 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-log-httpd\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.431815 4900 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-run-httpd\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.533358 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-log-httpd\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.533428 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-run-httpd\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.533499 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.533586 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cff7q\" (UniqueName: \"kubernetes.io/projected/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-kube-api-access-cff7q\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.533630 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.533665 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-config-data\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.533700 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-scripts\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.534712 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-log-httpd\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.534972 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-run-httpd\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.542023 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.545350 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-config-data\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.549147 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.557023 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-scripts\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.561983 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cff7q\" (UniqueName: \"kubernetes.io/projected/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-kube-api-access-cff7q\") pod \"ceilometer-0\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " pod="openstack/ceilometer-0" Dec 02 15:25:11 crc kubenswrapper[4900]: I1202 15:25:11.594688 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 15:25:13 crc kubenswrapper[4900]: I1202 15:25:13.355937 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:25:13 crc kubenswrapper[4900]: W1202 15:25:13.356742 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec85f04e_8c8c_442a_b4b1_6d26c20f9d6f.slice/crio-4fa0901095c8c7c576feedcd2840eb9e00127d65d78f12d246006cd5796cfe9d WatchSource:0}: Error finding container 4fa0901095c8c7c576feedcd2840eb9e00127d65d78f12d246006cd5796cfe9d: Status 404 returned error can't find the container with id 4fa0901095c8c7c576feedcd2840eb9e00127d65d78f12d246006cd5796cfe9d Dec 02 15:25:13 crc kubenswrapper[4900]: I1202 15:25:13.380316 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f","Type":"ContainerStarted","Data":"4fa0901095c8c7c576feedcd2840eb9e00127d65d78f12d246006cd5796cfe9d"} Dec 02 15:25:14 crc kubenswrapper[4900]: I1202 15:25:14.395348 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f","Type":"ContainerStarted","Data":"50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d"} Dec 02 15:25:15 crc kubenswrapper[4900]: I1202 15:25:15.117268 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:25:15 crc kubenswrapper[4900]: I1202 15:25:15.117344 4900 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:25:15 crc kubenswrapper[4900]: I1202 15:25:15.440491 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f","Type":"ContainerStarted","Data":"48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096"} Dec 02 15:25:16 crc kubenswrapper[4900]: I1202 15:25:16.454325 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f","Type":"ContainerStarted","Data":"b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708"} Dec 02 15:25:19 crc kubenswrapper[4900]: I1202 15:25:19.488125 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f","Type":"ContainerStarted","Data":"687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5"} Dec 02 15:25:19 crc kubenswrapper[4900]: I1202 15:25:19.488735 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 15:25:19 crc kubenswrapper[4900]: I1202 15:25:19.511165 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.820053907 podStartE2EDuration="8.511147345s" podCreationTimestamp="2025-12-02 15:25:11 +0000 UTC" firstStartedPulling="2025-12-02 15:25:13.361523568 +0000 UTC m=+6158.777337429" lastFinishedPulling="2025-12-02 15:25:19.052617016 +0000 UTC m=+6164.468430867" observedRunningTime="2025-12-02 15:25:19.503928422 +0000 UTC m=+6164.919742273" watchObservedRunningTime="2025-12-02 15:25:19.511147345 +0000 UTC m=+6164.926961196" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.693609 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-wp7kl"] Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.695931 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-wp7kl" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.705022 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-wp7kl"] Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.768543 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x87lm\" (UniqueName: \"kubernetes.io/projected/966744da-bd39-4cc3-8fc1-b8e2b66f1499-kube-api-access-x87lm\") pod \"aodh-db-create-wp7kl\" (UID: \"966744da-bd39-4cc3-8fc1-b8e2b66f1499\") " pod="openstack/aodh-db-create-wp7kl" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.769036 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/966744da-bd39-4cc3-8fc1-b8e2b66f1499-operator-scripts\") pod \"aodh-db-create-wp7kl\" (UID: \"966744da-bd39-4cc3-8fc1-b8e2b66f1499\") " pod="openstack/aodh-db-create-wp7kl" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.787608 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-2ff6-account-create-update-sldkc"] Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.789407 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-2ff6-account-create-update-sldkc" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.791408 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.807010 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2ff6-account-create-update-sldkc"] Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.870851 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/966744da-bd39-4cc3-8fc1-b8e2b66f1499-operator-scripts\") pod \"aodh-db-create-wp7kl\" (UID: \"966744da-bd39-4cc3-8fc1-b8e2b66f1499\") " pod="openstack/aodh-db-create-wp7kl" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.871046 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/894c30d1-7f8c-4440-bc50-6531ede27372-operator-scripts\") pod \"aodh-2ff6-account-create-update-sldkc\" (UID: \"894c30d1-7f8c-4440-bc50-6531ede27372\") " pod="openstack/aodh-2ff6-account-create-update-sldkc" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.871116 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x87lm\" (UniqueName: \"kubernetes.io/projected/966744da-bd39-4cc3-8fc1-b8e2b66f1499-kube-api-access-x87lm\") pod \"aodh-db-create-wp7kl\" (UID: \"966744da-bd39-4cc3-8fc1-b8e2b66f1499\") " pod="openstack/aodh-db-create-wp7kl" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.871166 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbv9h\" (UniqueName: \"kubernetes.io/projected/894c30d1-7f8c-4440-bc50-6531ede27372-kube-api-access-kbv9h\") pod \"aodh-2ff6-account-create-update-sldkc\" (UID: \"894c30d1-7f8c-4440-bc50-6531ede27372\") " pod="openstack/aodh-2ff6-account-create-update-sldkc" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.872051 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/966744da-bd39-4cc3-8fc1-b8e2b66f1499-operator-scripts\") pod \"aodh-db-create-wp7kl\" (UID: \"966744da-bd39-4cc3-8fc1-b8e2b66f1499\") " pod="openstack/aodh-db-create-wp7kl" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.890485 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x87lm\" (UniqueName: \"kubernetes.io/projected/966744da-bd39-4cc3-8fc1-b8e2b66f1499-kube-api-access-x87lm\") pod \"aodh-db-create-wp7kl\" (UID: \"966744da-bd39-4cc3-8fc1-b8e2b66f1499\") " pod="openstack/aodh-db-create-wp7kl" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.973753 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/894c30d1-7f8c-4440-bc50-6531ede27372-operator-scripts\") pod \"aodh-2ff6-account-create-update-sldkc\" (UID: \"894c30d1-7f8c-4440-bc50-6531ede27372\") " pod="openstack/aodh-2ff6-account-create-update-sldkc" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.974466 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/894c30d1-7f8c-4440-bc50-6531ede27372-operator-scripts\") pod \"aodh-2ff6-account-create-update-sldkc\" (UID: \"894c30d1-7f8c-4440-bc50-6531ede27372\") " 
pod="openstack/aodh-2ff6-account-create-update-sldkc" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.974573 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbv9h\" (UniqueName: \"kubernetes.io/projected/894c30d1-7f8c-4440-bc50-6531ede27372-kube-api-access-kbv9h\") pod \"aodh-2ff6-account-create-update-sldkc\" (UID: \"894c30d1-7f8c-4440-bc50-6531ede27372\") " pod="openstack/aodh-2ff6-account-create-update-sldkc" Dec 02 15:25:22 crc kubenswrapper[4900]: I1202 15:25:22.992895 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbv9h\" (UniqueName: \"kubernetes.io/projected/894c30d1-7f8c-4440-bc50-6531ede27372-kube-api-access-kbv9h\") pod \"aodh-2ff6-account-create-update-sldkc\" (UID: \"894c30d1-7f8c-4440-bc50-6531ede27372\") " pod="openstack/aodh-2ff6-account-create-update-sldkc" Dec 02 15:25:23 crc kubenswrapper[4900]: I1202 15:25:23.015653 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-wp7kl" Dec 02 15:25:23 crc kubenswrapper[4900]: I1202 15:25:23.109057 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2ff6-account-create-update-sldkc" Dec 02 15:25:23 crc kubenswrapper[4900]: W1202 15:25:23.537571 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod966744da_bd39_4cc3_8fc1_b8e2b66f1499.slice/crio-99e1667fd4d1c97ab02f35d4d25b34d0fcf92cda3506fae9806989ff18ddfe8c WatchSource:0}: Error finding container 99e1667fd4d1c97ab02f35d4d25b34d0fcf92cda3506fae9806989ff18ddfe8c: Status 404 returned error can't find the container with id 99e1667fd4d1c97ab02f35d4d25b34d0fcf92cda3506fae9806989ff18ddfe8c Dec 02 15:25:23 crc kubenswrapper[4900]: I1202 15:25:23.537679 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-wp7kl"] Dec 02 15:25:23 crc kubenswrapper[4900]: I1202 15:25:23.549259 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-wp7kl" event={"ID":"966744da-bd39-4cc3-8fc1-b8e2b66f1499","Type":"ContainerStarted","Data":"99e1667fd4d1c97ab02f35d4d25b34d0fcf92cda3506fae9806989ff18ddfe8c"} Dec 02 15:25:23 crc kubenswrapper[4900]: I1202 15:25:23.651409 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2ff6-account-create-update-sldkc"] Dec 02 15:25:23 crc kubenswrapper[4900]: W1202 15:25:23.652549 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894c30d1_7f8c_4440_bc50_6531ede27372.slice/crio-b60174e8d90b1a440c4894b932f3083d91a052a2806206a61ca203868abf5e8c WatchSource:0}: Error finding container b60174e8d90b1a440c4894b932f3083d91a052a2806206a61ca203868abf5e8c: Status 404 returned error can't find the container with id b60174e8d90b1a440c4894b932f3083d91a052a2806206a61ca203868abf5e8c Dec 02 15:25:24 crc kubenswrapper[4900]: I1202 15:25:24.558582 4900 generic.go:334] "Generic (PLEG): container finished" podID="894c30d1-7f8c-4440-bc50-6531ede27372" containerID="a77848aaf977bab0eb60a22b2edb155097f917dc5b900e8f794b3ace5422c553" exitCode=0 Dec 02 15:25:24 crc kubenswrapper[4900]: I1202 15:25:24.558903 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2ff6-account-create-update-sldkc" event={"ID":"894c30d1-7f8c-4440-bc50-6531ede27372","Type":"ContainerDied","Data":"a77848aaf977bab0eb60a22b2edb155097f917dc5b900e8f794b3ace5422c553"} 
Dec 02 15:25:24 crc kubenswrapper[4900]: I1202 15:25:24.558927 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2ff6-account-create-update-sldkc" event={"ID":"894c30d1-7f8c-4440-bc50-6531ede27372","Type":"ContainerStarted","Data":"b60174e8d90b1a440c4894b932f3083d91a052a2806206a61ca203868abf5e8c"} Dec 02 15:25:24 crc kubenswrapper[4900]: I1202 15:25:24.563043 4900 generic.go:334] "Generic (PLEG): container finished" podID="966744da-bd39-4cc3-8fc1-b8e2b66f1499" containerID="85fdb82d7586149cbc8b595e3665b41dca9053f6febe467b2ce309ed1f76ff37" exitCode=0 Dec 02 15:25:24 crc kubenswrapper[4900]: I1202 15:25:24.563076 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-wp7kl" event={"ID":"966744da-bd39-4cc3-8fc1-b8e2b66f1499","Type":"ContainerDied","Data":"85fdb82d7586149cbc8b595e3665b41dca9053f6febe467b2ce309ed1f76ff37"} Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.133541 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2ff6-account-create-update-sldkc" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.140444 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-wp7kl" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.246810 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbv9h\" (UniqueName: \"kubernetes.io/projected/894c30d1-7f8c-4440-bc50-6531ede27372-kube-api-access-kbv9h\") pod \"894c30d1-7f8c-4440-bc50-6531ede27372\" (UID: \"894c30d1-7f8c-4440-bc50-6531ede27372\") " Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.246849 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x87lm\" (UniqueName: \"kubernetes.io/projected/966744da-bd39-4cc3-8fc1-b8e2b66f1499-kube-api-access-x87lm\") pod \"966744da-bd39-4cc3-8fc1-b8e2b66f1499\" (UID: \"966744da-bd39-4cc3-8fc1-b8e2b66f1499\") " Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.247013 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/894c30d1-7f8c-4440-bc50-6531ede27372-operator-scripts\") pod \"894c30d1-7f8c-4440-bc50-6531ede27372\" (UID: \"894c30d1-7f8c-4440-bc50-6531ede27372\") " Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.247142 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/966744da-bd39-4cc3-8fc1-b8e2b66f1499-operator-scripts\") pod \"966744da-bd39-4cc3-8fc1-b8e2b66f1499\" (UID: \"966744da-bd39-4cc3-8fc1-b8e2b66f1499\") " Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.247564 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/894c30d1-7f8c-4440-bc50-6531ede27372-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "894c30d1-7f8c-4440-bc50-6531ede27372" (UID: "894c30d1-7f8c-4440-bc50-6531ede27372"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.247603 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/966744da-bd39-4cc3-8fc1-b8e2b66f1499-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "966744da-bd39-4cc3-8fc1-b8e2b66f1499" (UID: "966744da-bd39-4cc3-8fc1-b8e2b66f1499"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.253758 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/894c30d1-7f8c-4440-bc50-6531ede27372-kube-api-access-kbv9h" (OuterVolumeSpecName: "kube-api-access-kbv9h") pod "894c30d1-7f8c-4440-bc50-6531ede27372" (UID: "894c30d1-7f8c-4440-bc50-6531ede27372"). InnerVolumeSpecName "kube-api-access-kbv9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.254242 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966744da-bd39-4cc3-8fc1-b8e2b66f1499-kube-api-access-x87lm" (OuterVolumeSpecName: "kube-api-access-x87lm") pod "966744da-bd39-4cc3-8fc1-b8e2b66f1499" (UID: "966744da-bd39-4cc3-8fc1-b8e2b66f1499"). InnerVolumeSpecName "kube-api-access-x87lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.349681 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/966744da-bd39-4cc3-8fc1-b8e2b66f1499-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.349731 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x87lm\" (UniqueName: \"kubernetes.io/projected/966744da-bd39-4cc3-8fc1-b8e2b66f1499-kube-api-access-x87lm\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.349753 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbv9h\" (UniqueName: \"kubernetes.io/projected/894c30d1-7f8c-4440-bc50-6531ede27372-kube-api-access-kbv9h\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.349770 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/894c30d1-7f8c-4440-bc50-6531ede27372-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.586400 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2ff6-account-create-update-sldkc" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.586437 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2ff6-account-create-update-sldkc" event={"ID":"894c30d1-7f8c-4440-bc50-6531ede27372","Type":"ContainerDied","Data":"b60174e8d90b1a440c4894b932f3083d91a052a2806206a61ca203868abf5e8c"} Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.586944 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b60174e8d90b1a440c4894b932f3083d91a052a2806206a61ca203868abf5e8c" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.588303 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-wp7kl" event={"ID":"966744da-bd39-4cc3-8fc1-b8e2b66f1499","Type":"ContainerDied","Data":"99e1667fd4d1c97ab02f35d4d25b34d0fcf92cda3506fae9806989ff18ddfe8c"} Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.588360 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99e1667fd4d1c97ab02f35d4d25b34d0fcf92cda3506fae9806989ff18ddfe8c" Dec 02 15:25:26 crc kubenswrapper[4900]: I1202 15:25:26.588383 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-wp7kl" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.067796 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-wrwr2"] Dec 02 15:25:28 crc kubenswrapper[4900]: E1202 15:25:28.068549 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="894c30d1-7f8c-4440-bc50-6531ede27372" containerName="mariadb-account-create-update" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.068563 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="894c30d1-7f8c-4440-bc50-6531ede27372" containerName="mariadb-account-create-update" Dec 02 15:25:28 crc kubenswrapper[4900]: E1202 15:25:28.068595 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966744da-bd39-4cc3-8fc1-b8e2b66f1499" containerName="mariadb-database-create" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.068604 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="966744da-bd39-4cc3-8fc1-b8e2b66f1499" containerName="mariadb-database-create" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.068948 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="894c30d1-7f8c-4440-bc50-6531ede27372" containerName="mariadb-account-create-update" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.068970 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="966744da-bd39-4cc3-8fc1-b8e2b66f1499" containerName="mariadb-database-create" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.069916 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.072590 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.072601 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-dln9m" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.073627 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.075368 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.077404 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-wrwr2"] Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.201714 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-scripts\") pod \"aodh-db-sync-wrwr2\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.201774 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-config-data\") pod \"aodh-db-sync-wrwr2\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.201810 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwgtq\" (UniqueName: \"kubernetes.io/projected/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-kube-api-access-rwgtq\") pod \"aodh-db-sync-wrwr2\" (UID: 
\"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.201884 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-combined-ca-bundle\") pod \"aodh-db-sync-wrwr2\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.303324 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-scripts\") pod \"aodh-db-sync-wrwr2\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.303383 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-config-data\") pod \"aodh-db-sync-wrwr2\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.303416 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwgtq\" (UniqueName: \"kubernetes.io/projected/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-kube-api-access-rwgtq\") pod \"aodh-db-sync-wrwr2\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.303497 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-combined-ca-bundle\") pod \"aodh-db-sync-wrwr2\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.309033 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-combined-ca-bundle\") pod \"aodh-db-sync-wrwr2\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.309446 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-config-data\") pod \"aodh-db-sync-wrwr2\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.320861 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-scripts\") pod \"aodh-db-sync-wrwr2\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.321786 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwgtq\" (UniqueName: \"kubernetes.io/projected/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-kube-api-access-rwgtq\") pod \"aodh-db-sync-wrwr2\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.396918 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:28 crc kubenswrapper[4900]: I1202 15:25:28.940781 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-wrwr2"] Dec 02 15:25:28 crc kubenswrapper[4900]: W1202 15:25:28.952584 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf12d29_7078_4a8c_b1d9_ac6b1d331056.slice/crio-680ca068bc46a52328b4a8ffefdfbd2d5c0285ecd3298aed69bba2a1429c13e6 WatchSource:0}: Error finding container 680ca068bc46a52328b4a8ffefdfbd2d5c0285ecd3298aed69bba2a1429c13e6: Status 404 returned error can't find the container with id 680ca068bc46a52328b4a8ffefdfbd2d5c0285ecd3298aed69bba2a1429c13e6 Dec 02 15:25:29 crc kubenswrapper[4900]: I1202 15:25:29.648859 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wrwr2" event={"ID":"bdf12d29-7078-4a8c-b1d9-ac6b1d331056","Type":"ContainerStarted","Data":"680ca068bc46a52328b4a8ffefdfbd2d5c0285ecd3298aed69bba2a1429c13e6"} Dec 02 15:25:35 crc kubenswrapper[4900]: I1202 15:25:35.494006 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 02 15:25:36 crc kubenswrapper[4900]: I1202 15:25:36.723623 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wrwr2" event={"ID":"bdf12d29-7078-4a8c-b1d9-ac6b1d331056","Type":"ContainerStarted","Data":"215909d9f8ab7fb787239e3fdcfe70742129381a322b3856e9f6cfe254821da7"} Dec 02 15:25:36 crc kubenswrapper[4900]: I1202 15:25:36.747138 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-wrwr2" podStartSLOduration=2.210906431 podStartE2EDuration="8.747118661s" podCreationTimestamp="2025-12-02 15:25:28 +0000 UTC" firstStartedPulling="2025-12-02 15:25:28.955557836 +0000 UTC m=+6174.371371687" lastFinishedPulling="2025-12-02 15:25:35.491770066 +0000 UTC m=+6180.907583917" observedRunningTime="2025-12-02 15:25:36.737796239 +0000 UTC m=+6182.153610110" watchObservedRunningTime="2025-12-02 15:25:36.747118661 +0000 UTC m=+6182.162932512" Dec 02 15:25:38 crc kubenswrapper[4900]: I1202 15:25:38.764280 4900 generic.go:334] "Generic (PLEG): container finished" podID="bdf12d29-7078-4a8c-b1d9-ac6b1d331056" containerID="215909d9f8ab7fb787239e3fdcfe70742129381a322b3856e9f6cfe254821da7" exitCode=0 Dec 02 15:25:38 crc kubenswrapper[4900]: I1202 15:25:38.764675 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wrwr2" event={"ID":"bdf12d29-7078-4a8c-b1d9-ac6b1d331056","Type":"ContainerDied","Data":"215909d9f8ab7fb787239e3fdcfe70742129381a322b3856e9f6cfe254821da7"} Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.159541 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.247699 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-combined-ca-bundle\") pod \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.247908 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-scripts\") pod \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.248005 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwgtq\" (UniqueName: \"kubernetes.io/projected/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-kube-api-access-rwgtq\") pod \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.248045 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-config-data\") pod \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\" (UID: \"bdf12d29-7078-4a8c-b1d9-ac6b1d331056\") " Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.253975 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-kube-api-access-rwgtq" (OuterVolumeSpecName: "kube-api-access-rwgtq") pod "bdf12d29-7078-4a8c-b1d9-ac6b1d331056" (UID: "bdf12d29-7078-4a8c-b1d9-ac6b1d331056"). InnerVolumeSpecName "kube-api-access-rwgtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.253988 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-scripts" (OuterVolumeSpecName: "scripts") pod "bdf12d29-7078-4a8c-b1d9-ac6b1d331056" (UID: "bdf12d29-7078-4a8c-b1d9-ac6b1d331056"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.282508 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdf12d29-7078-4a8c-b1d9-ac6b1d331056" (UID: "bdf12d29-7078-4a8c-b1d9-ac6b1d331056"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.283519 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-config-data" (OuterVolumeSpecName: "config-data") pod "bdf12d29-7078-4a8c-b1d9-ac6b1d331056" (UID: "bdf12d29-7078-4a8c-b1d9-ac6b1d331056"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.349918 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.349945 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwgtq\" (UniqueName: \"kubernetes.io/projected/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-kube-api-access-rwgtq\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.349954 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.349967 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf12d29-7078-4a8c-b1d9-ac6b1d331056-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.810947 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wrwr2" event={"ID":"bdf12d29-7078-4a8c-b1d9-ac6b1d331056","Type":"ContainerDied","Data":"680ca068bc46a52328b4a8ffefdfbd2d5c0285ecd3298aed69bba2a1429c13e6"} Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.811030 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="680ca068bc46a52328b4a8ffefdfbd2d5c0285ecd3298aed69bba2a1429c13e6" Dec 02 15:25:40 crc kubenswrapper[4900]: I1202 15:25:40.810993 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-wrwr2" Dec 02 15:25:41 crc kubenswrapper[4900]: I1202 15:25:41.608449 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.739452 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Dec 02 15:25:42 crc kubenswrapper[4900]: E1202 15:25:42.740887 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf12d29-7078-4a8c-b1d9-ac6b1d331056" containerName="aodh-db-sync" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.740908 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf12d29-7078-4a8c-b1d9-ac6b1d331056" containerName="aodh-db-sync" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.741504 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf12d29-7078-4a8c-b1d9-ac6b1d331056" containerName="aodh-db-sync" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.743404 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.753729 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.753924 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-dln9m" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.754761 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.764935 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.801829 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9822e03f-8de7-4b82-8703-d7b868859bc1-scripts\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.801931 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9822e03f-8de7-4b82-8703-d7b868859bc1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.801965 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9822e03f-8de7-4b82-8703-d7b868859bc1-config-data\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.801981 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c98l9\" (UniqueName: \"kubernetes.io/projected/9822e03f-8de7-4b82-8703-d7b868859bc1-kube-api-access-c98l9\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.904260 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9822e03f-8de7-4b82-8703-d7b868859bc1-scripts\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.904363 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9822e03f-8de7-4b82-8703-d7b868859bc1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.904389 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9822e03f-8de7-4b82-8703-d7b868859bc1-config-data\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.904407 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c98l9\" (UniqueName: \"kubernetes.io/projected/9822e03f-8de7-4b82-8703-d7b868859bc1-kube-api-access-c98l9\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: 
I1202 15:25:42.910565 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9822e03f-8de7-4b82-8703-d7b868859bc1-config-data\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.912116 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9822e03f-8de7-4b82-8703-d7b868859bc1-scripts\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.913175 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9822e03f-8de7-4b82-8703-d7b868859bc1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:42 crc kubenswrapper[4900]: I1202 15:25:42.934135 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c98l9\" (UniqueName: \"kubernetes.io/projected/9822e03f-8de7-4b82-8703-d7b868859bc1-kube-api-access-c98l9\") pod \"aodh-0\" (UID: \"9822e03f-8de7-4b82-8703-d7b868859bc1\") " pod="openstack/aodh-0" Dec 02 15:25:43 crc kubenswrapper[4900]: I1202 15:25:43.080021 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 02 15:25:43 crc kubenswrapper[4900]: I1202 15:25:43.775959 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 02 15:25:43 crc kubenswrapper[4900]: I1202 15:25:43.840314 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9822e03f-8de7-4b82-8703-d7b868859bc1","Type":"ContainerStarted","Data":"382de12de942441111fc3b6ad64caa09a399d0b164ea6e7f27df04eaccca8b53"} Dec 02 15:25:44 crc kubenswrapper[4900]: I1202 15:25:44.850776 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9822e03f-8de7-4b82-8703-d7b868859bc1","Type":"ContainerStarted","Data":"3ed35ef32a5dd729743a7b18ca3b8b43974cc43442f801405d984c4ed182fc3a"} Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.017070 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.017384 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="ceilometer-central-agent" containerID="cri-o://50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d" gracePeriod=30 Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.017489 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="proxy-httpd" containerID="cri-o://687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5" gracePeriod=30 Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.017531 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="sg-core" containerID="cri-o://b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708" gracePeriod=30 Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.017563 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="ceilometer-notification-agent" containerID="cri-o://48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096" gracePeriod=30 Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.117014 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.117099 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.866000 4900 generic.go:334] "Generic (PLEG): container finished" podID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerID="687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5" exitCode=0 Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.866343 4900 generic.go:334] "Generic (PLEG): container finished" podID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerID="b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708" exitCode=2 Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.866353 4900 generic.go:334] "Generic (PLEG): container finished" podID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerID="50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d" exitCode=0 Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.866076 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f","Type":"ContainerDied","Data":"687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5"} Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.866391 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f","Type":"ContainerDied","Data":"b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708"} Dec 02 15:25:45 crc kubenswrapper[4900]: I1202 15:25:45.866410 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f","Type":"ContainerDied","Data":"50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d"} Dec 02 15:25:46 crc kubenswrapper[4900]: I1202 15:25:46.884162 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9822e03f-8de7-4b82-8703-d7b868859bc1","Type":"ContainerStarted","Data":"f520f47c99dbc05a8de7adbe2979e832256cdc371518e9183c600611d58bd980"} Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.671845 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.727769 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-config-data\") pod \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.728377 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-combined-ca-bundle\") pod \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.728591 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cff7q\" (UniqueName: \"kubernetes.io/projected/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-kube-api-access-cff7q\") pod \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.728626 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-sg-core-conf-yaml\") pod \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.728677 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-run-httpd\") pod \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.728753 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-scripts\") pod \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.728813 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-log-httpd\") pod \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\" (UID: \"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f\") " Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.729812 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" (UID: "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.731789 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" (UID: "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.735998 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-scripts" (OuterVolumeSpecName: "scripts") pod "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" (UID: "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.751046 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-kube-api-access-cff7q" (OuterVolumeSpecName: "kube-api-access-cff7q") pod "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" (UID: "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f"). InnerVolumeSpecName "kube-api-access-cff7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.783843 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" (UID: "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.825250 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" (UID: "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.832333 4900 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.832366 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.832378 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cff7q\" (UniqueName: \"kubernetes.io/projected/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-kube-api-access-cff7q\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.832389 4900 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.832401 4900 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.832409 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.843877 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-config-data" (OuterVolumeSpecName: "config-data") pod "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" (UID: "ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.915910 4900 generic.go:334] "Generic (PLEG): container finished" podID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerID="48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096" exitCode=0 Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.915986 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.940269 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.949413 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9822e03f-8de7-4b82-8703-d7b868859bc1","Type":"ContainerStarted","Data":"b29c9a320a878217d059bca083856e48072110343b9c7913c56a6e55d7c549f7"} Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.949565 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f","Type":"ContainerDied","Data":"48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096"} Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.949617 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f","Type":"ContainerDied","Data":"4fa0901095c8c7c576feedcd2840eb9e00127d65d78f12d246006cd5796cfe9d"} Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.949711 4900 scope.go:117] "RemoveContainer" containerID="687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.982911 4900 scope.go:117] "RemoveContainer" containerID="b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708" Dec 02 15:25:48 crc kubenswrapper[4900]: I1202 15:25:48.993419 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.011713 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.025588 4900 scope.go:117] "RemoveContainer" containerID="48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.025783 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:25:49 crc kubenswrapper[4900]: E1202 15:25:49.026291 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="ceilometer-notification-agent" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.026313 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="ceilometer-notification-agent" Dec 02 15:25:49 crc kubenswrapper[4900]: E1202 15:25:49.026352 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="ceilometer-central-agent" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.026362 4900 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="ceilometer-central-agent" Dec 02 15:25:49 crc kubenswrapper[4900]: E1202 15:25:49.026374 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="sg-core" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.026381 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="sg-core" Dec 02 15:25:49 crc kubenswrapper[4900]: E1202 15:25:49.026413 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="proxy-httpd" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.026421 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="proxy-httpd" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.026679 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="sg-core" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.026706 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="ceilometer-notification-agent" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.026734 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="proxy-httpd" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.026752 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" containerName="ceilometer-central-agent" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.037976 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.038043 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.039954 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.040160 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.044501 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-scripts\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.044530 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.044694 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-config-data\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.044780 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.044800 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-run-httpd\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.044879 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvrfp\" (UniqueName: \"kubernetes.io/projected/7ec112f4-de32-4bfa-87be-2e86d404c4e8-kube-api-access-dvrfp\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.044929 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-log-httpd\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.060790 4900 scope.go:117] "RemoveContainer" containerID="50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.082235 4900 scope.go:117] "RemoveContainer" containerID="687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5" Dec 02 15:25:49 crc kubenswrapper[4900]: E1202 
15:25:49.082776 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5\": container with ID starting with 687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5 not found: ID does not exist" containerID="687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.082827 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5"} err="failed to get container status \"687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5\": rpc error: code = NotFound desc = could not find container \"687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5\": container with ID starting with 687671a64178ed9c19b77580a1dae57fdb742f44e54e3dcb8384bfc3557c49b5 not found: ID does not exist" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.083154 4900 scope.go:117] "RemoveContainer" containerID="b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708" Dec 02 15:25:49 crc kubenswrapper[4900]: E1202 15:25:49.083684 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708\": container with ID starting with b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708 not found: ID does not exist" containerID="b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.083725 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708"} err="failed to get container status \"b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708\": rpc error: code = NotFound desc = could not find container \"b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708\": container with ID starting with b6ec9b474da0d441b5e4e29691fe4f76dbdc518bcda7e2ce5513095640295708 not found: ID does not exist" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.083752 4900 scope.go:117] "RemoveContainer" containerID="48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096" Dec 02 15:25:49 crc kubenswrapper[4900]: E1202 15:25:49.090713 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096\": container with ID starting with 48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096 not found: ID does not exist" containerID="48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.090764 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096"} err="failed to get container status \"48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096\": rpc error: code = NotFound desc = could not find container \"48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096\": container with ID starting with 48d9d3de6674d788fa03a04b89fb712482c7141f090d92956226a03eeec3c096 not found: ID does not exist" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.090791 4900 
scope.go:117] "RemoveContainer" containerID="50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d" Dec 02 15:25:49 crc kubenswrapper[4900]: E1202 15:25:49.092151 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d\": container with ID starting with 50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d not found: ID does not exist" containerID="50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.092174 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d"} err="failed to get container status \"50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d\": rpc error: code = NotFound desc = could not find container \"50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d\": container with ID starting with 50ef3266d451538375e502a214f3f08a393b69998139d98ec3bf18e7bf64429d not found: ID does not exist" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.146742 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvrfp\" (UniqueName: \"kubernetes.io/projected/7ec112f4-de32-4bfa-87be-2e86d404c4e8-kube-api-access-dvrfp\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.146830 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-log-httpd\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.146916 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-scripts\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.146950 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.147111 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-config-data\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.147178 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.147214 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-run-httpd\") pod 
\"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.147453 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-log-httpd\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.147847 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-run-httpd\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.152321 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.152845 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-config-data\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.152851 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-scripts\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.153704 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.177948 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvrfp\" (UniqueName: \"kubernetes.io/projected/7ec112f4-de32-4bfa-87be-2e86d404c4e8-kube-api-access-dvrfp\") pod \"ceilometer-0\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " pod="openstack/ceilometer-0" Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.365357 4900 util.go:30] "No sandbox for pod can be found. 
Dec 02 15:25:49 crc kubenswrapper[4900]: I1202 15:25:49.908793 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 02 15:25:50 crc kubenswrapper[4900]: I1202 15:25:50.932085 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f" path="/var/lib/kubelet/pods/ec85f04e-8c8c-442a-b4b1-6d26c20f9d6f/volumes"
Dec 02 15:25:50 crc kubenswrapper[4900]: I1202 15:25:50.951775 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec112f4-de32-4bfa-87be-2e86d404c4e8","Type":"ContainerStarted","Data":"0373b9d07f37a915171a733edf4044fcbcc6d387d05b2a7f00c91d0cb81dc9ae"}
Dec 02 15:25:51 crc kubenswrapper[4900]: I1202 15:25:51.963557 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec112f4-de32-4bfa-87be-2e86d404c4e8","Type":"ContainerStarted","Data":"9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c"}
Dec 02 15:25:51 crc kubenswrapper[4900]: I1202 15:25:51.968062 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9822e03f-8de7-4b82-8703-d7b868859bc1","Type":"ContainerStarted","Data":"65d502a4c76b59dde086ae309930111f3667d875095c66664d3acb960e632460"}
Dec 02 15:25:51 crc kubenswrapper[4900]: I1202 15:25:51.996474 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.992355038 podStartE2EDuration="9.996451491s" podCreationTimestamp="2025-12-02 15:25:42 +0000 UTC" firstStartedPulling="2025-12-02 15:25:43.785394045 +0000 UTC m=+6189.201207896" lastFinishedPulling="2025-12-02 15:25:50.789490508 +0000 UTC m=+6196.205304349" observedRunningTime="2025-12-02 15:25:51.992988604 +0000 UTC m=+6197.408802475" watchObservedRunningTime="2025-12-02 15:25:51.996451491 +0000 UTC m=+6197.412265352"
Dec 02 15:25:54 crc kubenswrapper[4900]: I1202 15:25:54.002910 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec112f4-de32-4bfa-87be-2e86d404c4e8","Type":"ContainerStarted","Data":"591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1"}
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.022580 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec112f4-de32-4bfa-87be-2e86d404c4e8","Type":"ContainerStarted","Data":"1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720"}
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.069044 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1530-account-create-update-jmbtq"]
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.087838 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-35fd-account-create-update-kbfxd"]
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.097208 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-276c9"]
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.107816 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-12e0-account-create-update-5cz74"]
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.118860 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-7r5wn"]
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.128876 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mfpnk"]
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.142027 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-276c9"]
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.155940 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-35fd-account-create-update-kbfxd"]
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.169256 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-12e0-account-create-update-5cz74"]
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.181970 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7r5wn"]
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.195503 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1530-account-create-update-jmbtq"]
Dec 02 15:25:55 crc kubenswrapper[4900]: I1202 15:25:55.210632 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mfpnk"]
Dec 02 15:25:56 crc kubenswrapper[4900]: I1202 15:25:56.921687 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256bfb3b-0b62-4c85-b687-8f84d248c1a4" path="/var/lib/kubelet/pods/256bfb3b-0b62-4c85-b687-8f84d248c1a4/volumes"
Dec 02 15:25:56 crc kubenswrapper[4900]: I1202 15:25:56.923814 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ed1538-6965-4e3b-900c-f1d66ae74adb" path="/var/lib/kubelet/pods/32ed1538-6965-4e3b-900c-f1d66ae74adb/volumes"
Dec 02 15:25:56 crc kubenswrapper[4900]: I1202 15:25:56.924329 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a6db4ab-d069-45ed-b01c-37508a8e2a78" path="/var/lib/kubelet/pods/6a6db4ab-d069-45ed-b01c-37508a8e2a78/volumes"
Dec 02 15:25:56 crc kubenswrapper[4900]: I1202 15:25:56.925230 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b289267-ed9d-42c8-8aa7-ff762dc944b2" path="/var/lib/kubelet/pods/6b289267-ed9d-42c8-8aa7-ff762dc944b2/volumes"
Dec 02 15:25:56 crc kubenswrapper[4900]: I1202 15:25:56.926288 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7da5713f-20df-4ec0-b8c2-857404cd476e" path="/var/lib/kubelet/pods/7da5713f-20df-4ec0-b8c2-857404cd476e/volumes"
Dec 02 15:25:56 crc kubenswrapper[4900]: I1202 15:25:56.926933 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e05fdeb9-4372-4614-8d2e-437a6467dea2" path="/var/lib/kubelet/pods/e05fdeb9-4372-4614-8d2e-437a6467dea2/volumes"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.049483 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec112f4-de32-4bfa-87be-2e86d404c4e8","Type":"ContainerStarted","Data":"7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97"}
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.049808 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.086375 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.864085865 podStartE2EDuration="9.086358405s" podCreationTimestamp="2025-12-02 15:25:48 +0000 UTC" firstStartedPulling="2025-12-02 15:25:49.940534175 +0000 UTC m=+6195.356348026" lastFinishedPulling="2025-12-02 15:25:56.162806715 +0000 UTC m=+6201.578620566" observedRunningTime="2025-12-02 15:25:57.071082625 +0000 UTC m=+6202.486896506" watchObservedRunningTime="2025-12-02 15:25:57.086358405 +0000 UTC m=+6202.502172256"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.409482 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-jz9pz"]
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.414537 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-jz9pz"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.423605 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-jz9pz"]
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.570715 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0897af7-7309-43b3-b1ff-f3097209a5eb-operator-scripts\") pod \"manila-db-create-jz9pz\" (UID: \"c0897af7-7309-43b3-b1ff-f3097209a5eb\") " pod="openstack/manila-db-create-jz9pz"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.570936 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7tm\" (UniqueName: \"kubernetes.io/projected/c0897af7-7309-43b3-b1ff-f3097209a5eb-kube-api-access-5b7tm\") pod \"manila-db-create-jz9pz\" (UID: \"c0897af7-7309-43b3-b1ff-f3097209a5eb\") " pod="openstack/manila-db-create-jz9pz"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.619788 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-93a5-account-create-update-486c7"]
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.621196 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-93a5-account-create-update-486c7"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.623174 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.630840 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-93a5-account-create-update-486c7"]
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.672156 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b7tm\" (UniqueName: \"kubernetes.io/projected/c0897af7-7309-43b3-b1ff-f3097209a5eb-kube-api-access-5b7tm\") pod \"manila-db-create-jz9pz\" (UID: \"c0897af7-7309-43b3-b1ff-f3097209a5eb\") " pod="openstack/manila-db-create-jz9pz"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.672241 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0897af7-7309-43b3-b1ff-f3097209a5eb-operator-scripts\") pod \"manila-db-create-jz9pz\" (UID: \"c0897af7-7309-43b3-b1ff-f3097209a5eb\") " pod="openstack/manila-db-create-jz9pz"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.673086 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0897af7-7309-43b3-b1ff-f3097209a5eb-operator-scripts\") pod \"manila-db-create-jz9pz\" (UID: \"c0897af7-7309-43b3-b1ff-f3097209a5eb\") " pod="openstack/manila-db-create-jz9pz"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.692070 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b7tm\" (UniqueName: \"kubernetes.io/projected/c0897af7-7309-43b3-b1ff-f3097209a5eb-kube-api-access-5b7tm\") pod \"manila-db-create-jz9pz\" (UID: \"c0897af7-7309-43b3-b1ff-f3097209a5eb\") " pod="openstack/manila-db-create-jz9pz"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.736531 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-jz9pz"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.774413 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcwl9\" (UniqueName: \"kubernetes.io/projected/3886a21d-e294-4785-96b5-349d2ac2806e-kube-api-access-zcwl9\") pod \"manila-93a5-account-create-update-486c7\" (UID: \"3886a21d-e294-4785-96b5-349d2ac2806e\") " pod="openstack/manila-93a5-account-create-update-486c7"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.774609 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3886a21d-e294-4785-96b5-349d2ac2806e-operator-scripts\") pod \"manila-93a5-account-create-update-486c7\" (UID: \"3886a21d-e294-4785-96b5-349d2ac2806e\") " pod="openstack/manila-93a5-account-create-update-486c7"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.875902 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcwl9\" (UniqueName: \"kubernetes.io/projected/3886a21d-e294-4785-96b5-349d2ac2806e-kube-api-access-zcwl9\") pod \"manila-93a5-account-create-update-486c7\" (UID: \"3886a21d-e294-4785-96b5-349d2ac2806e\") " pod="openstack/manila-93a5-account-create-update-486c7"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.876521 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3886a21d-e294-4785-96b5-349d2ac2806e-operator-scripts\") pod \"manila-93a5-account-create-update-486c7\" (UID: \"3886a21d-e294-4785-96b5-349d2ac2806e\") " pod="openstack/manila-93a5-account-create-update-486c7"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.877222 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3886a21d-e294-4785-96b5-349d2ac2806e-operator-scripts\") pod \"manila-93a5-account-create-update-486c7\" (UID: \"3886a21d-e294-4785-96b5-349d2ac2806e\") " pod="openstack/manila-93a5-account-create-update-486c7"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.895915 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcwl9\" (UniqueName: \"kubernetes.io/projected/3886a21d-e294-4785-96b5-349d2ac2806e-kube-api-access-zcwl9\") pod \"manila-93a5-account-create-update-486c7\" (UID: \"3886a21d-e294-4785-96b5-349d2ac2806e\") " pod="openstack/manila-93a5-account-create-update-486c7"
Dec 02 15:25:57 crc kubenswrapper[4900]: I1202 15:25:57.948704 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-93a5-account-create-update-486c7"
Dec 02 15:25:58 crc kubenswrapper[4900]: I1202 15:25:58.320523 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-jz9pz"]
Dec 02 15:25:58 crc kubenswrapper[4900]: I1202 15:25:58.493723 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-93a5-account-create-update-486c7"]
Dec 02 15:25:58 crc kubenswrapper[4900]: W1202 15:25:58.502027 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3886a21d_e294_4785_96b5_349d2ac2806e.slice/crio-4429d491a93ff6b8694f79cdd6e705e91cb45fa799b8d24f57493fe40ad121a7 WatchSource:0}: Error finding container 4429d491a93ff6b8694f79cdd6e705e91cb45fa799b8d24f57493fe40ad121a7: Status 404 returned error can't find the container with id 4429d491a93ff6b8694f79cdd6e705e91cb45fa799b8d24f57493fe40ad121a7
Dec 02 15:25:59 crc kubenswrapper[4900]: I1202 15:25:59.077519 4900 generic.go:334] "Generic (PLEG): container finished" podID="c0897af7-7309-43b3-b1ff-f3097209a5eb" containerID="e69c7a7eb83a480f66909df76151c9a7e58b11695542e7d60a494f99bd4c22bd" exitCode=0
Dec 02 15:25:59 crc kubenswrapper[4900]: I1202 15:25:59.077572 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jz9pz" event={"ID":"c0897af7-7309-43b3-b1ff-f3097209a5eb","Type":"ContainerDied","Data":"e69c7a7eb83a480f66909df76151c9a7e58b11695542e7d60a494f99bd4c22bd"}
Dec 02 15:25:59 crc kubenswrapper[4900]: I1202 15:25:59.080843 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jz9pz" event={"ID":"c0897af7-7309-43b3-b1ff-f3097209a5eb","Type":"ContainerStarted","Data":"62a3c76cea0291b74b6f354e568fc9cf5de4e50f4755733ee84f68cd72f732b9"}
Dec 02 15:25:59 crc kubenswrapper[4900]: I1202 15:25:59.091718 4900 generic.go:334] "Generic (PLEG): container finished" podID="3886a21d-e294-4785-96b5-349d2ac2806e" containerID="e4194303e41fa9711b8777e540eb46bfaa738962f2c95061c242bfa838bbe5cf" exitCode=0
Dec 02 15:25:59 crc kubenswrapper[4900]: I1202 15:25:59.091795 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-93a5-account-create-update-486c7" event={"ID":"3886a21d-e294-4785-96b5-349d2ac2806e","Type":"ContainerDied","Data":"e4194303e41fa9711b8777e540eb46bfaa738962f2c95061c242bfa838bbe5cf"}
Dec 02 15:25:59 crc kubenswrapper[4900]: I1202 15:25:59.091828 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-93a5-account-create-update-486c7" event={"ID":"3886a21d-e294-4785-96b5-349d2ac2806e","Type":"ContainerStarted","Data":"4429d491a93ff6b8694f79cdd6e705e91cb45fa799b8d24f57493fe40ad121a7"}
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.502279 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-93a5-account-create-update-486c7"
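
The W1202 manager.go entry is cAdvisor handling a cgroup watch event for a container CRI-O had already torn down by the time it was inspected, hence the 404; with short-lived job containers like these the window is easy to hit. A sketch of the same pattern, checking the cgroup path and treating "already gone" as non-fatal (the path is taken from the log entry above; the handling itself is an assumption, not cAdvisor code):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// inspectCgroup stats a cgroup directory that a watch event pointed at.
// Between the event arriving and this call, the container can legitimately vanish.
func inspectCgroup(path string) error {
	if _, err := os.Stat(path); err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			return nil // container exited and was cleaned up first: benign race
		}
		return fmt.Errorf("inspecting %s: %w", path, err)
	}
	fmt.Println("cgroup still present:", path)
	return nil
}

func main() {
	path := "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/" +
		"kubepods-besteffort-pod3886a21d_e294_4785_96b5_349d2ac2806e.slice/" +
		"crio-4429d491a93ff6b8694f79cdd6e705e91cb45fa799b8d24f57493fe40ad121a7"
	if err := inspectCgroup(path); err != nil {
		fmt.Println(err)
	}
}
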
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.638725 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3886a21d-e294-4785-96b5-349d2ac2806e-operator-scripts\") pod \"3886a21d-e294-4785-96b5-349d2ac2806e\" (UID: \"3886a21d-e294-4785-96b5-349d2ac2806e\") "
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.638894 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcwl9\" (UniqueName: \"kubernetes.io/projected/3886a21d-e294-4785-96b5-349d2ac2806e-kube-api-access-zcwl9\") pod \"3886a21d-e294-4785-96b5-349d2ac2806e\" (UID: \"3886a21d-e294-4785-96b5-349d2ac2806e\") "
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.639609 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3886a21d-e294-4785-96b5-349d2ac2806e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3886a21d-e294-4785-96b5-349d2ac2806e" (UID: "3886a21d-e294-4785-96b5-349d2ac2806e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.640992 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3886a21d-e294-4785-96b5-349d2ac2806e-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.645349 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3886a21d-e294-4785-96b5-349d2ac2806e-kube-api-access-zcwl9" (OuterVolumeSpecName: "kube-api-access-zcwl9") pod "3886a21d-e294-4785-96b5-349d2ac2806e" (UID: "3886a21d-e294-4785-96b5-349d2ac2806e"). InnerVolumeSpecName "kube-api-access-zcwl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.709821 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-jz9pz"
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.743423 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcwl9\" (UniqueName: \"kubernetes.io/projected/3886a21d-e294-4785-96b5-349d2ac2806e-kube-api-access-zcwl9\") on node \"crc\" DevicePath \"\""
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.845281 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0897af7-7309-43b3-b1ff-f3097209a5eb-operator-scripts\") pod \"c0897af7-7309-43b3-b1ff-f3097209a5eb\" (UID: \"c0897af7-7309-43b3-b1ff-f3097209a5eb\") "
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.845428 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b7tm\" (UniqueName: \"kubernetes.io/projected/c0897af7-7309-43b3-b1ff-f3097209a5eb-kube-api-access-5b7tm\") pod \"c0897af7-7309-43b3-b1ff-f3097209a5eb\" (UID: \"c0897af7-7309-43b3-b1ff-f3097209a5eb\") "
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.845803 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0897af7-7309-43b3-b1ff-f3097209a5eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0897af7-7309-43b3-b1ff-f3097209a5eb" (UID: "c0897af7-7309-43b3-b1ff-f3097209a5eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.846336 4900 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0897af7-7309-43b3-b1ff-f3097209a5eb-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.849221 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0897af7-7309-43b3-b1ff-f3097209a5eb-kube-api-access-5b7tm" (OuterVolumeSpecName: "kube-api-access-5b7tm") pod "c0897af7-7309-43b3-b1ff-f3097209a5eb" (UID: "c0897af7-7309-43b3-b1ff-f3097209a5eb"). InnerVolumeSpecName "kube-api-access-5b7tm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:26:00 crc kubenswrapper[4900]: I1202 15:26:00.949284 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b7tm\" (UniqueName: \"kubernetes.io/projected/c0897af7-7309-43b3-b1ff-f3097209a5eb-kube-api-access-5b7tm\") on node \"crc\" DevicePath \"\""
Dec 02 15:26:01 crc kubenswrapper[4900]: I1202 15:26:01.124772 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jz9pz" event={"ID":"c0897af7-7309-43b3-b1ff-f3097209a5eb","Type":"ContainerDied","Data":"62a3c76cea0291b74b6f354e568fc9cf5de4e50f4755733ee84f68cd72f732b9"}
Dec 02 15:26:01 crc kubenswrapper[4900]: I1202 15:26:01.124836 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62a3c76cea0291b74b6f354e568fc9cf5de4e50f4755733ee84f68cd72f732b9"
Dec 02 15:26:01 crc kubenswrapper[4900]: I1202 15:26:01.124794 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-jz9pz"
Dec 02 15:26:01 crc kubenswrapper[4900]: I1202 15:26:01.127575 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-93a5-account-create-update-486c7" event={"ID":"3886a21d-e294-4785-96b5-349d2ac2806e","Type":"ContainerDied","Data":"4429d491a93ff6b8694f79cdd6e705e91cb45fa799b8d24f57493fe40ad121a7"}
Dec 02 15:26:01 crc kubenswrapper[4900]: I1202 15:26:01.127609 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4429d491a93ff6b8694f79cdd6e705e91cb45fa799b8d24f57493fe40ad121a7"
Dec 02 15:26:01 crc kubenswrapper[4900]: I1202 15:26:01.127675 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-93a5-account-create-update-486c7"
Dec 02 15:26:02 crc kubenswrapper[4900]: I1202 15:26:02.999456 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-4622t"]
Dec 02 15:26:03 crc kubenswrapper[4900]: E1202 15:26:03.001419 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0897af7-7309-43b3-b1ff-f3097209a5eb" containerName="mariadb-database-create"
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.007288 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0897af7-7309-43b3-b1ff-f3097209a5eb" containerName="mariadb-database-create"
Dec 02 15:26:03 crc kubenswrapper[4900]: E1202 15:26:03.007375 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3886a21d-e294-4785-96b5-349d2ac2806e" containerName="mariadb-account-create-update"
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.007384 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3886a21d-e294-4785-96b5-349d2ac2806e" containerName="mariadb-account-create-update"
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.007892 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0897af7-7309-43b3-b1ff-f3097209a5eb" containerName="mariadb-database-create"
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.007962 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3886a21d-e294-4785-96b5-349d2ac2806e" containerName="mariadb-account-create-update"
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.008842 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-4622t"
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.010635 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.011584 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-gvpq6"
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.013426 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-4622t"]
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.104437 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-config-data\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t"
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.104742 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-job-config-data\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t"
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.105012 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-combined-ca-bundle\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t"
Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.105097 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbpxj\" (UniqueName: \"kubernetes.io/projected/719b71e7-a7e7-4348-8ef3-b5a3594791e7-kube-api-access-rbpxj\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t"
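
The E1202 cpu_manager and memory_manager entries are housekeeping, not failures: before admitting manila-db-sync-4622t, the resource managers drop per-container assignments left behind by the just-deleted db-create job pods. A toy sketch of that idea, dropping rows from an in-memory assignment table when their pod is no longer active (the types and values are illustrative stand-ins, not kubelet's state):

package main

import "fmt"

// key mirrors how the managers index state: pod UID plus container name.
type key struct{ podUID, container string }

// assignments stands in for the CPU manager's checkpointed state.
var assignments = map[key]string{
	{"c0897af7-7309-43b3-b1ff-f3097209a5eb", "mariadb-database-create"}:       "cpuset A",
	{"3886a21d-e294-4785-96b5-349d2ac2806e", "mariadb-account-create-update"}: "cpuset B",
}

// removeStaleState drops assignments whose pod is no longer active.
func removeStaleState(activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	removeStaleState(map[string]bool{}) // neither job pod is active any more
	fmt.Println("remaining assignments:", len(assignments))
}
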
\"kubernetes.io/projected/719b71e7-a7e7-4348-8ef3-b5a3594791e7-kube-api-access-rbpxj\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t" Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.206555 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-config-data\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t" Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.206622 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-job-config-data\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t" Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.206737 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-combined-ca-bundle\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t" Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.206759 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbpxj\" (UniqueName: \"kubernetes.io/projected/719b71e7-a7e7-4348-8ef3-b5a3594791e7-kube-api-access-rbpxj\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t" Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.212636 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-job-config-data\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t" Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.212843 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-config-data\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t" Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.217218 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-combined-ca-bundle\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t" Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.230066 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbpxj\" (UniqueName: \"kubernetes.io/projected/719b71e7-a7e7-4348-8ef3-b5a3594791e7-kube-api-access-rbpxj\") pod \"manila-db-sync-4622t\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " pod="openstack/manila-db-sync-4622t" Dec 02 15:26:03 crc kubenswrapper[4900]: I1202 15:26:03.327676 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-4622t" Dec 02 15:26:04 crc kubenswrapper[4900]: W1202 15:26:04.013858 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719b71e7_a7e7_4348_8ef3_b5a3594791e7.slice/crio-7777dc94cb150cb46497856cf64a7163b83311777efdc1580c3930a0e2bd6788 WatchSource:0}: Error finding container 7777dc94cb150cb46497856cf64a7163b83311777efdc1580c3930a0e2bd6788: Status 404 returned error can't find the container with id 7777dc94cb150cb46497856cf64a7163b83311777efdc1580c3930a0e2bd6788 Dec 02 15:26:04 crc kubenswrapper[4900]: I1202 15:26:04.015594 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-4622t"] Dec 02 15:26:04 crc kubenswrapper[4900]: I1202 15:26:04.069323 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cn752"] Dec 02 15:26:04 crc kubenswrapper[4900]: I1202 15:26:04.083751 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cn752"] Dec 02 15:26:04 crc kubenswrapper[4900]: I1202 15:26:04.168780 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-4622t" event={"ID":"719b71e7-a7e7-4348-8ef3-b5a3594791e7","Type":"ContainerStarted","Data":"7777dc94cb150cb46497856cf64a7163b83311777efdc1580c3930a0e2bd6788"} Dec 02 15:26:04 crc kubenswrapper[4900]: I1202 15:26:04.923759 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f706357c-15e7-4c4c-abbb-d0f793926d53" path="/var/lib/kubelet/pods/f706357c-15e7-4c4c-abbb-d0f793926d53/volumes" Dec 02 15:26:09 crc kubenswrapper[4900]: I1202 15:26:09.223991 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-4622t" event={"ID":"719b71e7-a7e7-4348-8ef3-b5a3594791e7","Type":"ContainerStarted","Data":"41dc13f0cef977cbfcb4826a230158f95afe42f33bde3c5a567e66c1aa6022fb"} Dec 02 15:26:09 crc kubenswrapper[4900]: I1202 15:26:09.262999 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-4622t" podStartSLOduration=3.254417802 podStartE2EDuration="7.262976497s" podCreationTimestamp="2025-12-02 15:26:02 +0000 UTC" firstStartedPulling="2025-12-02 15:26:04.016528148 +0000 UTC m=+6209.432341999" lastFinishedPulling="2025-12-02 15:26:08.025086843 +0000 UTC m=+6213.440900694" observedRunningTime="2025-12-02 15:26:09.247179562 +0000 UTC m=+6214.662993413" watchObservedRunningTime="2025-12-02 15:26:09.262976497 +0000 UTC m=+6214.678790368" Dec 02 15:26:10 crc kubenswrapper[4900]: I1202 15:26:10.840904 4900 scope.go:117] "RemoveContainer" containerID="88dfd412f3cb057f0fc7dad09a4a77b5f326ceb5021aee76675bec48e75ff2f1" Dec 02 15:26:10 crc kubenswrapper[4900]: I1202 15:26:10.881403 4900 scope.go:117] "RemoveContainer" containerID="ed195d3cabbdb44f8b4be3d6d5021e17696247d3ac44322c75cfae752fe5f057" Dec 02 15:26:10 crc kubenswrapper[4900]: I1202 15:26:10.936799 4900 scope.go:117] "RemoveContainer" containerID="2c5b1eb059e1370ccdb24fa4364af99d28a027734a06c4458f794ca88a527251" Dec 02 15:26:10 crc kubenswrapper[4900]: I1202 15:26:10.984515 4900 scope.go:117] "RemoveContainer" containerID="399768edcc20c611c7815f0631aa03d6b5b1e68840586eef3cbc72af56107695" Dec 02 15:26:11 crc kubenswrapper[4900]: I1202 15:26:11.037988 4900 scope.go:117] "RemoveContainer" containerID="58f9aafa2275e9034f9607eccdaff84db33cf6d7757cc0a280c46da303fa667a" Dec 02 15:26:11 crc kubenswrapper[4900]: I1202 15:26:11.070750 4900 
scope.go:117] "RemoveContainer" containerID="24a999f82f75daaa9aaa9a8873b8e93345a62c9d06f76b180d1408aa09f83359" Dec 02 15:26:11 crc kubenswrapper[4900]: I1202 15:26:11.118731 4900 scope.go:117] "RemoveContainer" containerID="7d995ff735a2edfb811d120417d5555f27119882ba26a352202587f9d504bb8b" Dec 02 15:26:11 crc kubenswrapper[4900]: I1202 15:26:11.262915 4900 generic.go:334] "Generic (PLEG): container finished" podID="719b71e7-a7e7-4348-8ef3-b5a3594791e7" containerID="41dc13f0cef977cbfcb4826a230158f95afe42f33bde3c5a567e66c1aa6022fb" exitCode=0 Dec 02 15:26:11 crc kubenswrapper[4900]: I1202 15:26:11.262968 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-4622t" event={"ID":"719b71e7-a7e7-4348-8ef3-b5a3594791e7","Type":"ContainerDied","Data":"41dc13f0cef977cbfcb4826a230158f95afe42f33bde3c5a567e66c1aa6022fb"} Dec 02 15:26:12 crc kubenswrapper[4900]: I1202 15:26:12.800250 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-4622t" Dec 02 15:26:12 crc kubenswrapper[4900]: I1202 15:26:12.938120 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbpxj\" (UniqueName: \"kubernetes.io/projected/719b71e7-a7e7-4348-8ef3-b5a3594791e7-kube-api-access-rbpxj\") pod \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " Dec 02 15:26:12 crc kubenswrapper[4900]: I1202 15:26:12.938260 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-job-config-data\") pod \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " Dec 02 15:26:12 crc kubenswrapper[4900]: I1202 15:26:12.938693 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-config-data\") pod \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " Dec 02 15:26:12 crc kubenswrapper[4900]: I1202 15:26:12.938866 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-combined-ca-bundle\") pod \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\" (UID: \"719b71e7-a7e7-4348-8ef3-b5a3594791e7\") " Dec 02 15:26:12 crc kubenswrapper[4900]: I1202 15:26:12.946906 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "719b71e7-a7e7-4348-8ef3-b5a3594791e7" (UID: "719b71e7-a7e7-4348-8ef3-b5a3594791e7"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:26:12 crc kubenswrapper[4900]: I1202 15:26:12.948400 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719b71e7-a7e7-4348-8ef3-b5a3594791e7-kube-api-access-rbpxj" (OuterVolumeSpecName: "kube-api-access-rbpxj") pod "719b71e7-a7e7-4348-8ef3-b5a3594791e7" (UID: "719b71e7-a7e7-4348-8ef3-b5a3594791e7"). InnerVolumeSpecName "kube-api-access-rbpxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:26:12 crc kubenswrapper[4900]: I1202 15:26:12.949691 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-config-data" (OuterVolumeSpecName: "config-data") pod "719b71e7-a7e7-4348-8ef3-b5a3594791e7" (UID: "719b71e7-a7e7-4348-8ef3-b5a3594791e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:26:12 crc kubenswrapper[4900]: I1202 15:26:12.978902 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "719b71e7-a7e7-4348-8ef3-b5a3594791e7" (UID: "719b71e7-a7e7-4348-8ef3-b5a3594791e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.042339 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.042369 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.042383 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbpxj\" (UniqueName: \"kubernetes.io/projected/719b71e7-a7e7-4348-8ef3-b5a3594791e7-kube-api-access-rbpxj\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.042396 4900 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/719b71e7-a7e7-4348-8ef3-b5a3594791e7-job-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.302280 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-4622t" event={"ID":"719b71e7-a7e7-4348-8ef3-b5a3594791e7","Type":"ContainerDied","Data":"7777dc94cb150cb46497856cf64a7163b83311777efdc1580c3930a0e2bd6788"} Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.302339 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7777dc94cb150cb46497856cf64a7163b83311777efdc1580c3930a0e2bd6788" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.302423 4900 util.go:48] "No ready sandbox for pod can be found. 
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.786257 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Dec 02 15:26:13 crc kubenswrapper[4900]: E1202 15:26:13.789968 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719b71e7-a7e7-4348-8ef3-b5a3594791e7" containerName="manila-db-sync"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.789997 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="719b71e7-a7e7-4348-8ef3-b5a3594791e7" containerName="manila-db-sync"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.790715 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="719b71e7-a7e7-4348-8ef3-b5a3594791e7" containerName="manila-db-sync"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.792849 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.796700 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.796738 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-gvpq6"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.796755 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.796851 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.814950 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.826385 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c878c5c75-hlc5d"]
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.828701 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.857137 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c878c5c75-hlc5d"]
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.867594 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-config-data\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.867659 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c884k\" (UniqueName: \"kubernetes.io/projected/0613f269-0d01-4034-a25a-0128fe099674-kube-api-access-c884k\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.867750 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-dns-svc\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.867782 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-nb\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.867796 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-config\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.867876 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.867902 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.867957 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-sb\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.867978 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56f52fb7-926a-4e18-b6d8-1455db37189a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.868010 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-scripts\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.868062 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5qxg\" (UniqueName: \"kubernetes.io/projected/56f52fb7-926a-4e18-b6d8-1455db37189a-kube-api-access-z5qxg\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.885488 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.887567 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.891942 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.916433 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969562 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f5hx\" (UniqueName: \"kubernetes.io/projected/e755696b-ec67-463d-9d13-acb00739dec6-kube-api-access-2f5hx\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969621 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969697 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-config-data\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969724 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c884k\" (UniqueName: \"kubernetes.io/projected/0613f269-0d01-4034-a25a-0128fe099674-kube-api-access-c884k\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d"
Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969755 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e755696b-ec67-463d-9d13-acb00739dec6-ceph\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0"
\"kubernetes.io/projected/e755696b-ec67-463d-9d13-acb00739dec6-ceph\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969801 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-dns-svc\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969819 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-scripts\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969837 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-nb\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969851 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-config\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969886 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-config-data\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969920 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969939 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e755696b-ec67-463d-9d13-acb00739dec6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969955 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.969990 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e755696b-ec67-463d-9d13-acb00739dec6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: 
\"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.970008 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.970027 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-sb\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.970056 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56f52fb7-926a-4e18-b6d8-1455db37189a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.970080 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-scripts\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.970122 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5qxg\" (UniqueName: \"kubernetes.io/projected/56f52fb7-926a-4e18-b6d8-1455db37189a-kube-api-access-z5qxg\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.970722 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56f52fb7-926a-4e18-b6d8-1455db37189a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.971499 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-config\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.971828 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-sb\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.972464 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-nb\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.972525 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-dns-svc\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.975244 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-scripts\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.978178 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.988047 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c884k\" (UniqueName: \"kubernetes.io/projected/0613f269-0d01-4034-a25a-0128fe099674-kube-api-access-c884k\") pod \"dnsmasq-dns-7c878c5c75-hlc5d\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.988626 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-config-data\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0" Dec 02 15:26:13 crc kubenswrapper[4900]: I1202 15:26:13.990748 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5qxg\" (UniqueName: \"kubernetes.io/projected/56f52fb7-926a-4e18-b6d8-1455db37189a-kube-api-access-z5qxg\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.009354 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56f52fb7-926a-4e18-b6d8-1455db37189a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"56f52fb7-926a-4e18-b6d8-1455db37189a\") " pod="openstack/manila-scheduler-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.057853 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.060371 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.063503 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072306 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f5hx\" (UniqueName: \"kubernetes.io/projected/e755696b-ec67-463d-9d13-acb00739dec6-kube-api-access-2f5hx\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072374 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072448 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072475 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e755696b-ec67-463d-9d13-acb00739dec6-ceph\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072505 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-logs\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072545 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-config-data\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072577 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-scripts\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072605 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-config-data-custom\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072841 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-config-data\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" 
Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072887 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgp4h\" (UniqueName: \"kubernetes.io/projected/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-kube-api-access-dgp4h\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072951 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e755696b-ec67-463d-9d13-acb00739dec6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.072987 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-scripts\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.073027 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e755696b-ec67-463d-9d13-acb00739dec6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.073059 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.073120 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.073305 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e755696b-ec67-463d-9d13-acb00739dec6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.073475 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e755696b-ec67-463d-9d13-acb00739dec6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.078486 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-scripts\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.079145 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/e755696b-ec67-463d-9d13-acb00739dec6-ceph\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.083361 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-config-data\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.097131 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.102460 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e755696b-ec67-463d-9d13-acb00739dec6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.104506 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f5hx\" (UniqueName: \"kubernetes.io/projected/e755696b-ec67-463d-9d13-acb00739dec6-kube-api-access-2f5hx\") pod \"manila-share-share1-0\" (UID: \"e755696b-ec67-463d-9d13-acb00739dec6\") " pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.114531 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.122378 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.157429 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.179540 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-scripts\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.179762 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.179954 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.180002 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-logs\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.180056 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-config-data\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.180101 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-config-data-custom\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.180202 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgp4h\" (UniqueName: \"kubernetes.io/projected/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-kube-api-access-dgp4h\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.186206 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.188721 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-config-data\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.189179 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-logs\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " 
pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.189812 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.202598 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-scripts\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.212916 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgp4h\" (UniqueName: \"kubernetes.io/projected/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-kube-api-access-dgp4h\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.223861 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.234442 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a834bd3-12e9-44c7-8d70-24b29fd29ab1-config-data-custom\") pod \"manila-api-0\" (UID: \"3a834bd3-12e9-44c7-8d70-24b29fd29ab1\") " pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.280138 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Dec 02 15:26:14 crc kubenswrapper[4900]: I1202 15:26:14.818166 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.036991 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c878c5c75-hlc5d"] Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.051736 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.116882 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.116936 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.116976 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.117784 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b89bde9e30bda55f0fc8913241034f85509c78b8e0ea65f7e6e475647a12267"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.117829 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://5b89bde9e30bda55f0fc8913241034f85509c78b8e0ea65f7e6e475647a12267" gracePeriod=600 Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.193100 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 02 15:26:15 crc kubenswrapper[4900]: W1202 15:26:15.202851 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a834bd3_12e9_44c7_8d70_24b29fd29ab1.slice/crio-942e98ad806a7162dc3043bd84982544215e9d53c38c2521769cbb634fd69e44 WatchSource:0}: Error finding container 942e98ad806a7162dc3043bd84982544215e9d53c38c2521769cbb634fd69e44: Status 404 returned error can't find the container with id 942e98ad806a7162dc3043bd84982544215e9d53c38c2521769cbb634fd69e44 Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.354681 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e755696b-ec67-463d-9d13-acb00739dec6","Type":"ContainerStarted","Data":"553955a284ea8bf73e8a3fca48b35a2d37879921147aa5239189621a1752e190"} Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.359285 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"56f52fb7-926a-4e18-b6d8-1455db37189a","Type":"ContainerStarted","Data":"5aef32b1127e5839bb7e92c78f0e9e275675f537108f0d8a2d546825e7ffb4df"} Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.360832 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" event={"ID":"0613f269-0d01-4034-a25a-0128fe099674","Type":"ContainerStarted","Data":"d649f375b77e2955dab08b3c59a221fc2beefc16e16237defec4cedf75ed061e"} Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.370804 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="5b89bde9e30bda55f0fc8913241034f85509c78b8e0ea65f7e6e475647a12267" exitCode=0 Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.370869 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"5b89bde9e30bda55f0fc8913241034f85509c78b8e0ea65f7e6e475647a12267"} Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.370903 4900 scope.go:117] "RemoveContainer" containerID="b1a1c0edd4daedb082cbaccf772bd3a573711e8016593010b2798ac615bb820e" Dec 02 15:26:15 crc kubenswrapper[4900]: I1202 15:26:15.372297 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a834bd3-12e9-44c7-8d70-24b29fd29ab1","Type":"ContainerStarted","Data":"942e98ad806a7162dc3043bd84982544215e9d53c38c2521769cbb634fd69e44"} Dec 02 15:26:16 crc kubenswrapper[4900]: I1202 15:26:16.406539 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a834bd3-12e9-44c7-8d70-24b29fd29ab1","Type":"ContainerStarted","Data":"e2aa24c5fe774561e71d7792b536daf245604ec33ef75eadbdac0ccc8767fe45"} Dec 02 15:26:16 crc kubenswrapper[4900]: I1202 15:26:16.406855 4900 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a834bd3-12e9-44c7-8d70-24b29fd29ab1","Type":"ContainerStarted","Data":"decbf4243569e79813d84a98a3c97d78439c78542418e50f9ab83674920e0580"} Dec 02 15:26:16 crc kubenswrapper[4900]: I1202 15:26:16.406884 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Dec 02 15:26:16 crc kubenswrapper[4900]: I1202 15:26:16.411291 4900 generic.go:334] "Generic (PLEG): container finished" podID="0613f269-0d01-4034-a25a-0128fe099674" containerID="914f588512f60c64c84d1bc2d554ff2c871799c3bb17299d3eb802ecfdd7b6ea" exitCode=0 Dec 02 15:26:16 crc kubenswrapper[4900]: I1202 15:26:16.411585 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" event={"ID":"0613f269-0d01-4034-a25a-0128fe099674","Type":"ContainerDied","Data":"914f588512f60c64c84d1bc2d554ff2c871799c3bb17299d3eb802ecfdd7b6ea"} Dec 02 15:26:16 crc kubenswrapper[4900]: I1202 15:26:16.413858 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3"} Dec 02 15:26:16 crc kubenswrapper[4900]: I1202 15:26:16.441991 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=2.441958699 podStartE2EDuration="2.441958699s" podCreationTimestamp="2025-12-02 15:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:26:16.428102609 +0000 UTC m=+6221.843916460" watchObservedRunningTime="2025-12-02 15:26:16.441958699 +0000 UTC m=+6221.857772540" Dec 02 15:26:17 crc kubenswrapper[4900]: I1202 15:26:17.027335 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jf5cj"] Dec 02 15:26:17 crc kubenswrapper[4900]: I1202 15:26:17.047373 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jf5cj"] Dec 02 15:26:17 crc kubenswrapper[4900]: I1202 15:26:17.430935 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" event={"ID":"0613f269-0d01-4034-a25a-0128fe099674","Type":"ContainerStarted","Data":"14942743b19f1a1ae8d023c9ad1ea792068729dfc8b9ff42101eb8f960a181bc"} Dec 02 15:26:17 crc kubenswrapper[4900]: I1202 15:26:17.431724 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:17 crc kubenswrapper[4900]: I1202 15:26:17.513447 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" podStartSLOduration=4.513428291 podStartE2EDuration="4.513428291s" podCreationTimestamp="2025-12-02 15:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:26:17.493956343 +0000 UTC m=+6222.909770194" watchObservedRunningTime="2025-12-02 15:26:17.513428291 +0000 UTC m=+6222.929242142" Dec 02 15:26:18 crc kubenswrapper[4900]: I1202 15:26:18.030582 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-gf82k"] Dec 02 15:26:18 crc kubenswrapper[4900]: I1202 15:26:18.044119 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-cell-mapping-gf82k"] Dec 02 15:26:18 crc kubenswrapper[4900]: I1202 15:26:18.445440 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"56f52fb7-926a-4e18-b6d8-1455db37189a","Type":"ContainerStarted","Data":"c2ce9bc3227d9cf62ed8712cdb9414efd1553b3c5f922475ade50d88c8342d9f"} Dec 02 15:26:18 crc kubenswrapper[4900]: I1202 15:26:18.933371 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cebb257b-4ffb-40bd-a873-77699f11ee7f" path="/var/lib/kubelet/pods/cebb257b-4ffb-40bd-a873-77699f11ee7f/volumes" Dec 02 15:26:18 crc kubenswrapper[4900]: I1202 15:26:18.936831 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb90faf1-bd20-4634-93f6-977be90a5a93" path="/var/lib/kubelet/pods/fb90faf1-bd20-4634-93f6-977be90a5a93/volumes" Dec 02 15:26:19 crc kubenswrapper[4900]: I1202 15:26:19.374914 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 02 15:26:19 crc kubenswrapper[4900]: I1202 15:26:19.471525 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"56f52fb7-926a-4e18-b6d8-1455db37189a","Type":"ContainerStarted","Data":"bfff4f31bbba42f637fb3e7a10e641e4ba3802a72d870df6b541dbff948fbfbf"} Dec 02 15:26:19 crc kubenswrapper[4900]: I1202 15:26:19.489910 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.513348898 podStartE2EDuration="6.489896231s" podCreationTimestamp="2025-12-02 15:26:13 +0000 UTC" firstStartedPulling="2025-12-02 15:26:14.826778732 +0000 UTC m=+6220.242592583" lastFinishedPulling="2025-12-02 15:26:16.803326065 +0000 UTC m=+6222.219139916" observedRunningTime="2025-12-02 15:26:19.488063789 +0000 UTC m=+6224.903877640" watchObservedRunningTime="2025-12-02 15:26:19.489896231 +0000 UTC m=+6224.905710082" Dec 02 15:26:22 crc kubenswrapper[4900]: I1202 15:26:22.498344 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e755696b-ec67-463d-9d13-acb00739dec6","Type":"ContainerStarted","Data":"3e04a5ab5142f794275847c64aecf4ebd4e45e37332703e75f224dee165ed80a"} Dec 02 15:26:23 crc kubenswrapper[4900]: I1202 15:26:23.525887 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e755696b-ec67-463d-9d13-acb00739dec6","Type":"ContainerStarted","Data":"4dca127c8b100157b90142d7aa5be871ac10ed038b8722a0b778aeb46246b341"} Dec 02 15:26:23 crc kubenswrapper[4900]: I1202 15:26:23.580065 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.033871551 podStartE2EDuration="10.580039721s" podCreationTimestamp="2025-12-02 15:26:13 +0000 UTC" firstStartedPulling="2025-12-02 15:26:15.082882617 +0000 UTC m=+6220.498696468" lastFinishedPulling="2025-12-02 15:26:21.629050777 +0000 UTC m=+6227.044864638" observedRunningTime="2025-12-02 15:26:23.565143102 +0000 UTC m=+6228.980956983" watchObservedRunningTime="2025-12-02 15:26:23.580039721 +0000 UTC m=+6228.995853592" Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.123566 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.160510 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:26:24 crc 
kubenswrapper[4900]: I1202 15:26:24.224986 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.253880 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d894f77c-x8xtc"] Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.254256 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" podUID="3b477c94-42a7-4865-849a-d7a4a77c17dc" containerName="dnsmasq-dns" containerID="cri-o://217076c6b2a28eeba77a19aef71169b0bf375bd7cf643fefefa08566c98775b8" gracePeriod=10 Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.543324 4900 generic.go:334] "Generic (PLEG): container finished" podID="3b477c94-42a7-4865-849a-d7a4a77c17dc" containerID="217076c6b2a28eeba77a19aef71169b0bf375bd7cf643fefefa08566c98775b8" exitCode=0 Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.543378 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" event={"ID":"3b477c94-42a7-4865-849a-d7a4a77c17dc","Type":"ContainerDied","Data":"217076c6b2a28eeba77a19aef71169b0bf375bd7cf643fefefa08566c98775b8"} Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.818093 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.897337 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lw7j\" (UniqueName: \"kubernetes.io/projected/3b477c94-42a7-4865-849a-d7a4a77c17dc-kube-api-access-2lw7j\") pod \"3b477c94-42a7-4865-849a-d7a4a77c17dc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.897408 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-dns-svc\") pod \"3b477c94-42a7-4865-849a-d7a4a77c17dc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.897526 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-sb\") pod \"3b477c94-42a7-4865-849a-d7a4a77c17dc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.897564 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-nb\") pod \"3b477c94-42a7-4865-849a-d7a4a77c17dc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.897636 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-config\") pod \"3b477c94-42a7-4865-849a-d7a4a77c17dc\" (UID: \"3b477c94-42a7-4865-849a-d7a4a77c17dc\") " Dec 02 15:26:24 crc kubenswrapper[4900]: I1202 15:26:24.912893 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b477c94-42a7-4865-849a-d7a4a77c17dc-kube-api-access-2lw7j" (OuterVolumeSpecName: "kube-api-access-2lw7j") pod "3b477c94-42a7-4865-849a-d7a4a77c17dc" (UID: "3b477c94-42a7-4865-849a-d7a4a77c17dc"). 
InnerVolumeSpecName "kube-api-access-2lw7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:24.999927 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lw7j\" (UniqueName: \"kubernetes.io/projected/3b477c94-42a7-4865-849a-d7a4a77c17dc-kube-api-access-2lw7j\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.036308 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3b477c94-42a7-4865-849a-d7a4a77c17dc" (UID: "3b477c94-42a7-4865-849a-d7a4a77c17dc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.082215 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3b477c94-42a7-4865-849a-d7a4a77c17dc" (UID: "3b477c94-42a7-4865-849a-d7a4a77c17dc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.090468 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3b477c94-42a7-4865-849a-d7a4a77c17dc" (UID: "3b477c94-42a7-4865-849a-d7a4a77c17dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.098203 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-config" (OuterVolumeSpecName: "config") pod "3b477c94-42a7-4865-849a-d7a4a77c17dc" (UID: "3b477c94-42a7-4865-849a-d7a4a77c17dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.102278 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.102419 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.102489 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.102547 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b477c94-42a7-4865-849a-d7a4a77c17dc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.554175 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" event={"ID":"3b477c94-42a7-4865-849a-d7a4a77c17dc","Type":"ContainerDied","Data":"66b896a1c4e2f738474c9cb8937a7bf43d351597c1e7121d0fea027fbc3ad203"} Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.554434 4900 scope.go:117] "RemoveContainer" containerID="217076c6b2a28eeba77a19aef71169b0bf375bd7cf643fefefa08566c98775b8" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.554245 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d894f77c-x8xtc" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.587343 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d894f77c-x8xtc"] Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.588283 4900 scope.go:117] "RemoveContainer" containerID="d8b8d368ecb21e47bfb083c9add34b61383e431d44e67fd4ca7560e9c48ed7ef" Dec 02 15:26:25 crc kubenswrapper[4900]: I1202 15:26:25.607252 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d894f77c-x8xtc"] Dec 02 15:26:26 crc kubenswrapper[4900]: I1202 15:26:26.718223 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:26:26 crc kubenswrapper[4900]: I1202 15:26:26.718961 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="ceilometer-central-agent" containerID="cri-o://9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c" gracePeriod=30 Dec 02 15:26:26 crc kubenswrapper[4900]: I1202 15:26:26.719477 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="proxy-httpd" containerID="cri-o://7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97" gracePeriod=30 Dec 02 15:26:26 crc kubenswrapper[4900]: I1202 15:26:26.719562 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="sg-core" containerID="cri-o://1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720" gracePeriod=30 Dec 02 15:26:26 crc kubenswrapper[4900]: I1202 15:26:26.719633 4900 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="ceilometer-notification-agent" containerID="cri-o://591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1" gracePeriod=30 Dec 02 15:26:26 crc kubenswrapper[4900]: I1202 15:26:26.923398 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b477c94-42a7-4865-849a-d7a4a77c17dc" path="/var/lib/kubelet/pods/3b477c94-42a7-4865-849a-d7a4a77c17dc/volumes" Dec 02 15:26:27 crc kubenswrapper[4900]: I1202 15:26:27.577473 4900 generic.go:334] "Generic (PLEG): container finished" podID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerID="7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97" exitCode=0 Dec 02 15:26:27 crc kubenswrapper[4900]: I1202 15:26:27.577712 4900 generic.go:334] "Generic (PLEG): container finished" podID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerID="1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720" exitCode=2 Dec 02 15:26:27 crc kubenswrapper[4900]: I1202 15:26:27.577539 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec112f4-de32-4bfa-87be-2e86d404c4e8","Type":"ContainerDied","Data":"7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97"} Dec 02 15:26:27 crc kubenswrapper[4900]: I1202 15:26:27.577736 4900 generic.go:334] "Generic (PLEG): container finished" podID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerID="9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c" exitCode=0 Dec 02 15:26:27 crc kubenswrapper[4900]: I1202 15:26:27.577754 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec112f4-de32-4bfa-87be-2e86d404c4e8","Type":"ContainerDied","Data":"1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720"} Dec 02 15:26:27 crc kubenswrapper[4900]: I1202 15:26:27.577771 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec112f4-de32-4bfa-87be-2e86d404c4e8","Type":"ContainerDied","Data":"9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c"} Dec 02 15:26:29 crc kubenswrapper[4900]: E1202 15:26:29.854437 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec112f4_de32_4bfa_87be_2e86d404c4e8.slice/crio-conmon-591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec112f4_de32_4bfa_87be_2e86d404c4e8.slice/crio-591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1.scope\": RecentStats: unable to find data in memory cache]" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.124371 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.249486 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvrfp\" (UniqueName: \"kubernetes.io/projected/7ec112f4-de32-4bfa-87be-2e86d404c4e8-kube-api-access-dvrfp\") pod \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.249846 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-run-httpd\") pod \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.250427 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7ec112f4-de32-4bfa-87be-2e86d404c4e8" (UID: "7ec112f4-de32-4bfa-87be-2e86d404c4e8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.251551 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-sg-core-conf-yaml\") pod \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.251837 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-combined-ca-bundle\") pod \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.251900 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-config-data\") pod \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.252044 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-scripts\") pod \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.252088 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-log-httpd\") pod \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\" (UID: \"7ec112f4-de32-4bfa-87be-2e86d404c4e8\") " Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.252665 4900 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.253317 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7ec112f4-de32-4bfa-87be-2e86d404c4e8" (UID: "7ec112f4-de32-4bfa-87be-2e86d404c4e8"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.256144 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec112f4-de32-4bfa-87be-2e86d404c4e8-kube-api-access-dvrfp" (OuterVolumeSpecName: "kube-api-access-dvrfp") pod "7ec112f4-de32-4bfa-87be-2e86d404c4e8" (UID: "7ec112f4-de32-4bfa-87be-2e86d404c4e8"). InnerVolumeSpecName "kube-api-access-dvrfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.268196 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-scripts" (OuterVolumeSpecName: "scripts") pod "7ec112f4-de32-4bfa-87be-2e86d404c4e8" (UID: "7ec112f4-de32-4bfa-87be-2e86d404c4e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.279873 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7ec112f4-de32-4bfa-87be-2e86d404c4e8" (UID: "7ec112f4-de32-4bfa-87be-2e86d404c4e8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.338681 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ec112f4-de32-4bfa-87be-2e86d404c4e8" (UID: "7ec112f4-de32-4bfa-87be-2e86d404c4e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.355801 4900 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.355838 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.355854 4900 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.355868 4900 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ec112f4-de32-4bfa-87be-2e86d404c4e8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.355909 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvrfp\" (UniqueName: \"kubernetes.io/projected/7ec112f4-de32-4bfa-87be-2e86d404c4e8-kube-api-access-dvrfp\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.364432 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-config-data" (OuterVolumeSpecName: "config-data") pod "7ec112f4-de32-4bfa-87be-2e86d404c4e8" (UID: "7ec112f4-de32-4bfa-87be-2e86d404c4e8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.457853 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec112f4-de32-4bfa-87be-2e86d404c4e8-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.627517 4900 generic.go:334] "Generic (PLEG): container finished" podID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerID="591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1" exitCode=0 Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.627555 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec112f4-de32-4bfa-87be-2e86d404c4e8","Type":"ContainerDied","Data":"591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1"} Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.627576 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.627588 4900 scope.go:117] "RemoveContainer" containerID="7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.627579 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ec112f4-de32-4bfa-87be-2e86d404c4e8","Type":"ContainerDied","Data":"0373b9d07f37a915171a733edf4044fcbcc6d387d05b2a7f00c91d0cb81dc9ae"} Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.658721 4900 scope.go:117] "RemoveContainer" containerID="1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.661139 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.670232 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.684197 4900 scope.go:117] "RemoveContainer" containerID="591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.700698 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:26:30 crc kubenswrapper[4900]: E1202 15:26:30.701192 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b477c94-42a7-4865-849a-d7a4a77c17dc" containerName="init" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.701211 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b477c94-42a7-4865-849a-d7a4a77c17dc" containerName="init" Dec 02 15:26:30 crc kubenswrapper[4900]: E1202 15:26:30.701232 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="proxy-httpd" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.701239 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="proxy-httpd" Dec 02 15:26:30 crc kubenswrapper[4900]: E1202 15:26:30.701261 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="ceilometer-notification-agent" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.701268 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="ceilometer-notification-agent" Dec 02 15:26:30 
crc kubenswrapper[4900]: E1202 15:26:30.701279 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="sg-core" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.701285 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="sg-core" Dec 02 15:26:30 crc kubenswrapper[4900]: E1202 15:26:30.701295 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="ceilometer-central-agent" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.701300 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="ceilometer-central-agent" Dec 02 15:26:30 crc kubenswrapper[4900]: E1202 15:26:30.701310 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b477c94-42a7-4865-849a-d7a4a77c17dc" containerName="dnsmasq-dns" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.701315 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b477c94-42a7-4865-849a-d7a4a77c17dc" containerName="dnsmasq-dns" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.701516 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="sg-core" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.701530 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="ceilometer-notification-agent" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.701544 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="ceilometer-central-agent" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.701555 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" containerName="proxy-httpd" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.701571 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b477c94-42a7-4865-849a-d7a4a77c17dc" containerName="dnsmasq-dns" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.707292 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.709976 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.710184 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.715064 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.735973 4900 scope.go:117] "RemoveContainer" containerID="9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.762615 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.762692 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-run-httpd\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.762725 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-config-data\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.762833 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-log-httpd\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.762887 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.762959 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-scripts\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.763027 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmr42\" (UniqueName: \"kubernetes.io/projected/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-kube-api-access-bmr42\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.802054 4900 scope.go:117] "RemoveContainer" containerID="7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97" Dec 02 15:26:30 crc kubenswrapper[4900]: E1202 
15:26:30.803283 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97\": container with ID starting with 7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97 not found: ID does not exist" containerID="7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.803322 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97"} err="failed to get container status \"7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97\": rpc error: code = NotFound desc = could not find container \"7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97\": container with ID starting with 7374a695cdacf2ac7a9da19ed408055e5315a933a89493902d2ea3e505d78b97 not found: ID does not exist" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.803357 4900 scope.go:117] "RemoveContainer" containerID="1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720" Dec 02 15:26:30 crc kubenswrapper[4900]: E1202 15:26:30.803593 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720\": container with ID starting with 1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720 not found: ID does not exist" containerID="1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.803614 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720"} err="failed to get container status \"1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720\": rpc error: code = NotFound desc = could not find container \"1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720\": container with ID starting with 1a19ba431afe867c959e09450ce589ccd3dd52aa22225c4940b38c4eef61f720 not found: ID does not exist" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.803627 4900 scope.go:117] "RemoveContainer" containerID="591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1" Dec 02 15:26:30 crc kubenswrapper[4900]: E1202 15:26:30.803923 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1\": container with ID starting with 591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1 not found: ID does not exist" containerID="591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.803943 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1"} err="failed to get container status \"591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1\": rpc error: code = NotFound desc = could not find container \"591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1\": container with ID starting with 591d553b0c6d183bc629fe433aec0b89844534ada168e8b7aad92deb46162fa1 not found: ID does not exist" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.803956 4900 
scope.go:117] "RemoveContainer" containerID="9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c" Dec 02 15:26:30 crc kubenswrapper[4900]: E1202 15:26:30.804246 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c\": container with ID starting with 9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c not found: ID does not exist" containerID="9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.804299 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c"} err="failed to get container status \"9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c\": rpc error: code = NotFound desc = could not find container \"9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c\": container with ID starting with 9f57b0879e69dc66bd91fb599c80d067f1290a3515d155d7ba34bd4df45ae46c not found: ID does not exist" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.864918 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmr42\" (UniqueName: \"kubernetes.io/projected/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-kube-api-access-bmr42\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.865043 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.865074 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-run-httpd\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.865105 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-config-data\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.865190 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-log-httpd\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.865238 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.865289 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-scripts\") pod 
\"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.866007 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-log-httpd\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.866007 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-run-httpd\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.870222 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.870639 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-scripts\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.870961 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.871993 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-config-data\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.881877 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmr42\" (UniqueName: \"kubernetes.io/projected/c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe-kube-api-access-bmr42\") pod \"ceilometer-0\" (UID: \"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe\") " pod="openstack/ceilometer-0" Dec 02 15:26:30 crc kubenswrapper[4900]: I1202 15:26:30.924803 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec112f4-de32-4bfa-87be-2e86d404c4e8" path="/var/lib/kubelet/pods/7ec112f4-de32-4bfa-87be-2e86d404c4e8/volumes" Dec 02 15:26:31 crc kubenswrapper[4900]: I1202 15:26:31.032090 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 15:26:31 crc kubenswrapper[4900]: I1202 15:26:31.515614 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 15:26:31 crc kubenswrapper[4900]: W1202 15:26:31.517275 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc24c627b_1ab7_4b1e_a1ec_bf182e9ef7fe.slice/crio-7776c87134272b5cedc660c50324f63d08d09fce380b02935a1ba9a39611da99 WatchSource:0}: Error finding container 7776c87134272b5cedc660c50324f63d08d09fce380b02935a1ba9a39611da99: Status 404 returned error can't find the container with id 7776c87134272b5cedc660c50324f63d08d09fce380b02935a1ba9a39611da99 Dec 02 15:26:31 crc kubenswrapper[4900]: I1202 15:26:31.641948 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe","Type":"ContainerStarted","Data":"7776c87134272b5cedc660c50324f63d08d09fce380b02935a1ba9a39611da99"} Dec 02 15:26:32 crc kubenswrapper[4900]: I1202 15:26:32.657759 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe","Type":"ContainerStarted","Data":"753c1fdfa59ebb81c54a5707f51a73205561e1aed0da2f4a3fa7bc63b8a914f6"} Dec 02 15:26:33 crc kubenswrapper[4900]: I1202 15:26:33.668251 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe","Type":"ContainerStarted","Data":"0063d3a53ad2925b410da2ffd780536f52cfff50d72f22b1408baa2c81189139"} Dec 02 15:26:35 crc kubenswrapper[4900]: I1202 15:26:35.680279 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Dec 02 15:26:35 crc kubenswrapper[4900]: I1202 15:26:35.690011 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe","Type":"ContainerStarted","Data":"4b73f87f572e3ee05713d60753bccf9a8e10b7c431bf75111a716f1bcd4647cc"} Dec 02 15:26:36 crc kubenswrapper[4900]: I1202 15:26:36.807604 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Dec 02 15:26:36 crc kubenswrapper[4900]: I1202 15:26:36.903620 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Dec 02 15:26:37 crc kubenswrapper[4900]: I1202 15:26:37.060630 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-dbh6d"] Dec 02 15:26:37 crc kubenswrapper[4900]: I1202 15:26:37.070494 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dbh6d"] Dec 02 15:26:37 crc kubenswrapper[4900]: I1202 15:26:37.720752 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe","Type":"ContainerStarted","Data":"0a6014d95277dbd60f2c1f8f585d011ec50a67a8d7a083d8deae20c3b498c0ff"} Dec 02 15:26:37 crc kubenswrapper[4900]: I1202 15:26:37.721158 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 15:26:37 crc kubenswrapper[4900]: I1202 15:26:37.751156 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.563784234 podStartE2EDuration="7.751134779s" podCreationTimestamp="2025-12-02 15:26:30 +0000 UTC" 
firstStartedPulling="2025-12-02 15:26:31.519566439 +0000 UTC m=+6236.935380280" lastFinishedPulling="2025-12-02 15:26:36.706916974 +0000 UTC m=+6242.122730825" observedRunningTime="2025-12-02 15:26:37.743750421 +0000 UTC m=+6243.159564292" watchObservedRunningTime="2025-12-02 15:26:37.751134779 +0000 UTC m=+6243.166948630" Dec 02 15:26:38 crc kubenswrapper[4900]: I1202 15:26:38.924723 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee15c717-27cd-48d5-b6d4-6be6043dbb1c" path="/var/lib/kubelet/pods/ee15c717-27cd-48d5-b6d4-6be6043dbb1c/volumes" Dec 02 15:27:01 crc kubenswrapper[4900]: I1202 15:27:01.038836 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 02 15:27:11 crc kubenswrapper[4900]: I1202 15:27:11.319162 4900 scope.go:117] "RemoveContainer" containerID="b08b8423171fdebb7fc3bd335bb2f46dd16200df8f085cc0cf605430f8c3b34b" Dec 02 15:27:11 crc kubenswrapper[4900]: I1202 15:27:11.350679 4900 scope.go:117] "RemoveContainer" containerID="9645877a1bf80624d27cc510d1d383b6d9542a151c58ab6da2e086cdb6586430" Dec 02 15:27:11 crc kubenswrapper[4900]: I1202 15:27:11.395691 4900 scope.go:117] "RemoveContainer" containerID="d3dfb294061abdd68c573fddbd64b52ce0d022bfedc07cd5694677da4da351f4" Dec 02 15:27:11 crc kubenswrapper[4900]: I1202 15:27:11.441948 4900 scope.go:117] "RemoveContainer" containerID="9a32dbc22f87b698cf0d79d9c763e83142f6f8c7381a3f95e9b9ce88a1e04db3" Dec 02 15:27:11 crc kubenswrapper[4900]: I1202 15:27:11.483729 4900 scope.go:117] "RemoveContainer" containerID="75adc67660f25f19d773cae4817514e6acb04f8d1ff7bd4636698aed4ba23d3d" Dec 02 15:27:19 crc kubenswrapper[4900]: I1202 15:27:19.068369 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-28b4-account-create-update-8xg7x"] Dec 02 15:27:19 crc kubenswrapper[4900]: I1202 15:27:19.082746 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-28b4-account-create-update-8xg7x"] Dec 02 15:27:20 crc kubenswrapper[4900]: I1202 15:27:20.030142 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zxh7q"] Dec 02 15:27:20 crc kubenswrapper[4900]: I1202 15:27:20.042392 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zxh7q"] Dec 02 15:27:20 crc kubenswrapper[4900]: I1202 15:27:20.928844 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f29261-e4ef-4678-baa6-faf96aebd096" path="/var/lib/kubelet/pods/29f29261-e4ef-4678-baa6-faf96aebd096/volumes" Dec 02 15:27:20 crc kubenswrapper[4900]: I1202 15:27:20.929600 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54cf169b-48e6-4515-8378-8301c440501e" path="/var/lib/kubelet/pods/54cf169b-48e6-4515-8378-8301c440501e/volumes" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.458230 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b597478c5-2r4f4"] Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.460701 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.462898 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.494604 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b597478c5-2r4f4"] Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.561343 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-dns-svc\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.561428 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-openstack-cell1\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.561513 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlsr\" (UniqueName: \"kubernetes.io/projected/214af79c-cfe9-402c-952c-f266f24dc165-kube-api-access-cqlsr\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.561543 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-config\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.561736 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-sb\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.561835 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-nb\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.664166 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-sb\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.664227 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-nb\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: 
\"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.664281 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-dns-svc\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.664322 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-openstack-cell1\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.664392 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqlsr\" (UniqueName: \"kubernetes.io/projected/214af79c-cfe9-402c-952c-f266f24dc165-kube-api-access-cqlsr\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.664420 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-config\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.665239 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-sb\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.665510 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-openstack-cell1\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.665600 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-config\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.666131 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-nb\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.666335 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-dns-svc\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 
15:27:23.697270 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqlsr\" (UniqueName: \"kubernetes.io/projected/214af79c-cfe9-402c-952c-f266f24dc165-kube-api-access-cqlsr\") pod \"dnsmasq-dns-7b597478c5-2r4f4\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:23 crc kubenswrapper[4900]: I1202 15:27:23.784514 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:24 crc kubenswrapper[4900]: I1202 15:27:24.311066 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b597478c5-2r4f4"] Dec 02 15:27:25 crc kubenswrapper[4900]: I1202 15:27:25.285904 4900 generic.go:334] "Generic (PLEG): container finished" podID="214af79c-cfe9-402c-952c-f266f24dc165" containerID="6cf58cd0ad6c7cb4ca2c1e0d3f3ff6a7c5777a7c478de45070bc05c641f44902" exitCode=0 Dec 02 15:27:25 crc kubenswrapper[4900]: I1202 15:27:25.286001 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" event={"ID":"214af79c-cfe9-402c-952c-f266f24dc165","Type":"ContainerDied","Data":"6cf58cd0ad6c7cb4ca2c1e0d3f3ff6a7c5777a7c478de45070bc05c641f44902"} Dec 02 15:27:25 crc kubenswrapper[4900]: I1202 15:27:25.286291 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" event={"ID":"214af79c-cfe9-402c-952c-f266f24dc165","Type":"ContainerStarted","Data":"3aabae5b59987d1166b7877f7463acc738152a5e731c28263c5702ec63eddc03"} Dec 02 15:27:26 crc kubenswrapper[4900]: I1202 15:27:26.304719 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" event={"ID":"214af79c-cfe9-402c-952c-f266f24dc165","Type":"ContainerStarted","Data":"be8e006053f622cebbd0f7b11be2133b458d433de84c7372a594ad10874d1a22"} Dec 02 15:27:26 crc kubenswrapper[4900]: I1202 15:27:26.305051 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:26 crc kubenswrapper[4900]: I1202 15:27:26.354998 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" podStartSLOduration=3.354968606 podStartE2EDuration="3.354968606s" podCreationTimestamp="2025-12-02 15:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:27:26.332546145 +0000 UTC m=+6291.748360036" watchObservedRunningTime="2025-12-02 15:27:26.354968606 +0000 UTC m=+6291.770782497" Dec 02 15:27:27 crc kubenswrapper[4900]: I1202 15:27:27.045112 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wrh6x"] Dec 02 15:27:27 crc kubenswrapper[4900]: I1202 15:27:27.060161 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wrh6x"] Dec 02 15:27:28 crc kubenswrapper[4900]: I1202 15:27:28.924946 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43f4a89e-fa23-47be-9747-b74a65735d5c" path="/var/lib/kubelet/pods/43f4a89e-fa23-47be-9747-b74a65735d5c/volumes" Dec 02 15:27:33 crc kubenswrapper[4900]: I1202 15:27:33.785900 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:33 crc kubenswrapper[4900]: I1202 15:27:33.846844 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c878c5c75-hlc5d"] Dec 
02 15:27:33 crc kubenswrapper[4900]: I1202 15:27:33.847547 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" podUID="0613f269-0d01-4034-a25a-0128fe099674" containerName="dnsmasq-dns" containerID="cri-o://14942743b19f1a1ae8d023c9ad1ea792068729dfc8b9ff42101eb8f960a181bc" gracePeriod=10 Dec 02 15:27:33 crc kubenswrapper[4900]: I1202 15:27:33.985134 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7799f47d95-mhbvd"] Dec 02 15:27:33 crc kubenswrapper[4900]: I1202 15:27:33.987179 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.016306 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7799f47d95-mhbvd"] Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.123530 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-openstack-cell1\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.123586 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-config\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.123617 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-ovsdbserver-nb\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.123639 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-dns-svc\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.125011 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-ovsdbserver-sb\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.125039 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v4gt\" (UniqueName: \"kubernetes.io/projected/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-kube-api-access-8v4gt\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.226674 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: 
\"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-openstack-cell1\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.226715 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-config\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.226741 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-ovsdbserver-nb\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.226761 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-dns-svc\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.226840 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-ovsdbserver-sb\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.226859 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v4gt\" (UniqueName: \"kubernetes.io/projected/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-kube-api-access-8v4gt\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.228654 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-openstack-cell1\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.229151 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-config\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.229634 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-ovsdbserver-nb\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.230329 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-dns-svc\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: 
\"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.230512 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-ovsdbserver-sb\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.255598 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v4gt\" (UniqueName: \"kubernetes.io/projected/6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4-kube-api-access-8v4gt\") pod \"dnsmasq-dns-7799f47d95-mhbvd\" (UID: \"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4\") " pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.340243 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.460011 4900 generic.go:334] "Generic (PLEG): container finished" podID="0613f269-0d01-4034-a25a-0128fe099674" containerID="14942743b19f1a1ae8d023c9ad1ea792068729dfc8b9ff42101eb8f960a181bc" exitCode=0 Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.460334 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" event={"ID":"0613f269-0d01-4034-a25a-0128fe099674","Type":"ContainerDied","Data":"14942743b19f1a1ae8d023c9ad1ea792068729dfc8b9ff42101eb8f960a181bc"} Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.582271 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.738572 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-sb\") pod \"0613f269-0d01-4034-a25a-0128fe099674\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.738714 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-dns-svc\") pod \"0613f269-0d01-4034-a25a-0128fe099674\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.738775 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-nb\") pod \"0613f269-0d01-4034-a25a-0128fe099674\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.738814 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-config\") pod \"0613f269-0d01-4034-a25a-0128fe099674\" (UID: \"0613f269-0d01-4034-a25a-0128fe099674\") " Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.738907 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c884k\" (UniqueName: \"kubernetes.io/projected/0613f269-0d01-4034-a25a-0128fe099674-kube-api-access-c884k\") pod \"0613f269-0d01-4034-a25a-0128fe099674\" (UID: 
\"0613f269-0d01-4034-a25a-0128fe099674\") " Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.744223 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0613f269-0d01-4034-a25a-0128fe099674-kube-api-access-c884k" (OuterVolumeSpecName: "kube-api-access-c884k") pod "0613f269-0d01-4034-a25a-0128fe099674" (UID: "0613f269-0d01-4034-a25a-0128fe099674"). InnerVolumeSpecName "kube-api-access-c884k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.794179 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0613f269-0d01-4034-a25a-0128fe099674" (UID: "0613f269-0d01-4034-a25a-0128fe099674"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.798009 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-config" (OuterVolumeSpecName: "config") pod "0613f269-0d01-4034-a25a-0128fe099674" (UID: "0613f269-0d01-4034-a25a-0128fe099674"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.801783 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0613f269-0d01-4034-a25a-0128fe099674" (UID: "0613f269-0d01-4034-a25a-0128fe099674"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.803186 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0613f269-0d01-4034-a25a-0128fe099674" (UID: "0613f269-0d01-4034-a25a-0128fe099674"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.842756 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c884k\" (UniqueName: \"kubernetes.io/projected/0613f269-0d01-4034-a25a-0128fe099674-kube-api-access-c884k\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.842823 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.842839 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.842852 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.842864 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0613f269-0d01-4034-a25a-0128fe099674-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:34 crc kubenswrapper[4900]: I1202 15:27:34.981442 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7799f47d95-mhbvd"] Dec 02 15:27:35 crc kubenswrapper[4900]: I1202 15:27:35.478437 4900 generic.go:334] "Generic (PLEG): container finished" podID="6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4" containerID="a569c63c3c39fa0f6bfb60c62dc9840eac8720ad6628c5ea5f3bd4e87484eb23" exitCode=0 Dec 02 15:27:35 crc kubenswrapper[4900]: I1202 15:27:35.478703 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" event={"ID":"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4","Type":"ContainerDied","Data":"a569c63c3c39fa0f6bfb60c62dc9840eac8720ad6628c5ea5f3bd4e87484eb23"} Dec 02 15:27:35 crc kubenswrapper[4900]: I1202 15:27:35.478748 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" event={"ID":"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4","Type":"ContainerStarted","Data":"140fb3c436fc810214fab1dc7db7c6edd42af42f45818f1505eacb9b9bb8f43c"} Dec 02 15:27:35 crc kubenswrapper[4900]: I1202 15:27:35.485322 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" event={"ID":"0613f269-0d01-4034-a25a-0128fe099674","Type":"ContainerDied","Data":"d649f375b77e2955dab08b3c59a221fc2beefc16e16237defec4cedf75ed061e"} Dec 02 15:27:35 crc kubenswrapper[4900]: I1202 15:27:35.485370 4900 scope.go:117] "RemoveContainer" containerID="14942743b19f1a1ae8d023c9ad1ea792068729dfc8b9ff42101eb8f960a181bc" Dec 02 15:27:35 crc kubenswrapper[4900]: I1202 15:27:35.485507 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" Dec 02 15:27:35 crc kubenswrapper[4900]: I1202 15:27:35.643236 4900 scope.go:117] "RemoveContainer" containerID="914f588512f60c64c84d1bc2d554ff2c871799c3bb17299d3eb802ecfdd7b6ea" Dec 02 15:27:35 crc kubenswrapper[4900]: I1202 15:27:35.685611 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c878c5c75-hlc5d"] Dec 02 15:27:35 crc kubenswrapper[4900]: I1202 15:27:35.701551 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c878c5c75-hlc5d"] Dec 02 15:27:36 crc kubenswrapper[4900]: I1202 15:27:36.498811 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" event={"ID":"6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4","Type":"ContainerStarted","Data":"088a704bfe5d101038ae2ea0a052dc2c13f43a6d2d849dd968e643364ba65858"} Dec 02 15:27:36 crc kubenswrapper[4900]: I1202 15:27:36.499392 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:36 crc kubenswrapper[4900]: I1202 15:27:36.523069 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" podStartSLOduration=3.523050844 podStartE2EDuration="3.523050844s" podCreationTimestamp="2025-12-02 15:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:27:36.521424559 +0000 UTC m=+6301.937238410" watchObservedRunningTime="2025-12-02 15:27:36.523050844 +0000 UTC m=+6301.938864705" Dec 02 15:27:36 crc kubenswrapper[4900]: I1202 15:27:36.919783 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0613f269-0d01-4034-a25a-0128fe099674" path="/var/lib/kubelet/pods/0613f269-0d01-4034-a25a-0128fe099674/volumes" Dec 02 15:27:39 crc kubenswrapper[4900]: I1202 15:27:39.159972 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c878c5c75-hlc5d" podUID="0613f269-0d01-4034-a25a-0128fe099674" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.141:5353: i/o timeout" Dec 02 15:27:44 crc kubenswrapper[4900]: I1202 15:27:44.341859 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7799f47d95-mhbvd" Dec 02 15:27:44 crc kubenswrapper[4900]: I1202 15:27:44.427634 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b597478c5-2r4f4"] Dec 02 15:27:44 crc kubenswrapper[4900]: I1202 15:27:44.427935 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" podUID="214af79c-cfe9-402c-952c-f266f24dc165" containerName="dnsmasq-dns" containerID="cri-o://be8e006053f622cebbd0f7b11be2133b458d433de84c7372a594ad10874d1a22" gracePeriod=10 Dec 02 15:27:44 crc kubenswrapper[4900]: I1202 15:27:44.614193 4900 generic.go:334] "Generic (PLEG): container finished" podID="214af79c-cfe9-402c-952c-f266f24dc165" containerID="be8e006053f622cebbd0f7b11be2133b458d433de84c7372a594ad10874d1a22" exitCode=0 Dec 02 15:27:44 crc kubenswrapper[4900]: I1202 15:27:44.614342 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" event={"ID":"214af79c-cfe9-402c-952c-f266f24dc165","Type":"ContainerDied","Data":"be8e006053f622cebbd0f7b11be2133b458d433de84c7372a594ad10874d1a22"} Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.004997 4900 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.182712 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-nb\") pod \"214af79c-cfe9-402c-952c-f266f24dc165\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.182833 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-config\") pod \"214af79c-cfe9-402c-952c-f266f24dc165\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.182880 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-openstack-cell1\") pod \"214af79c-cfe9-402c-952c-f266f24dc165\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.182965 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-dns-svc\") pod \"214af79c-cfe9-402c-952c-f266f24dc165\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.183081 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-sb\") pod \"214af79c-cfe9-402c-952c-f266f24dc165\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.183116 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqlsr\" (UniqueName: \"kubernetes.io/projected/214af79c-cfe9-402c-952c-f266f24dc165-kube-api-access-cqlsr\") pod \"214af79c-cfe9-402c-952c-f266f24dc165\" (UID: \"214af79c-cfe9-402c-952c-f266f24dc165\") " Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.199883 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214af79c-cfe9-402c-952c-f266f24dc165-kube-api-access-cqlsr" (OuterVolumeSpecName: "kube-api-access-cqlsr") pod "214af79c-cfe9-402c-952c-f266f24dc165" (UID: "214af79c-cfe9-402c-952c-f266f24dc165"). InnerVolumeSpecName "kube-api-access-cqlsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.235458 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "214af79c-cfe9-402c-952c-f266f24dc165" (UID: "214af79c-cfe9-402c-952c-f266f24dc165"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.239999 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "214af79c-cfe9-402c-952c-f266f24dc165" (UID: "214af79c-cfe9-402c-952c-f266f24dc165"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.242908 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "214af79c-cfe9-402c-952c-f266f24dc165" (UID: "214af79c-cfe9-402c-952c-f266f24dc165"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.243170 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "214af79c-cfe9-402c-952c-f266f24dc165" (UID: "214af79c-cfe9-402c-952c-f266f24dc165"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.258283 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-config" (OuterVolumeSpecName: "config") pod "214af79c-cfe9-402c-952c-f266f24dc165" (UID: "214af79c-cfe9-402c-952c-f266f24dc165"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.286818 4900 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.287056 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.287145 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqlsr\" (UniqueName: \"kubernetes.io/projected/214af79c-cfe9-402c-952c-f266f24dc165-kube-api-access-cqlsr\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.287221 4900 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.287295 4900 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.287370 4900 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/214af79c-cfe9-402c-952c-f266f24dc165-openstack-cell1\") on node \"crc\" DevicePath \"\"" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.625554 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" event={"ID":"214af79c-cfe9-402c-952c-f266f24dc165","Type":"ContainerDied","Data":"3aabae5b59987d1166b7877f7463acc738152a5e731c28263c5702ec63eddc03"} Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.625608 4900 scope.go:117] "RemoveContainer" containerID="be8e006053f622cebbd0f7b11be2133b458d433de84c7372a594ad10874d1a22" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.625638 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b597478c5-2r4f4" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.663323 4900 scope.go:117] "RemoveContainer" containerID="6cf58cd0ad6c7cb4ca2c1e0d3f3ff6a7c5777a7c478de45070bc05c641f44902" Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.686126 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b597478c5-2r4f4"] Dec 02 15:27:45 crc kubenswrapper[4900]: I1202 15:27:45.713335 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b597478c5-2r4f4"] Dec 02 15:27:46 crc kubenswrapper[4900]: I1202 15:27:46.921030 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214af79c-cfe9-402c-952c-f266f24dc165" path="/var/lib/kubelet/pods/214af79c-cfe9-402c-952c-f266f24dc165/volumes" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.854805 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h"] Dec 02 15:27:55 crc kubenswrapper[4900]: E1202 15:27:55.855745 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0613f269-0d01-4034-a25a-0128fe099674" containerName="init" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.855759 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0613f269-0d01-4034-a25a-0128fe099674" containerName="init" Dec 02 15:27:55 crc kubenswrapper[4900]: E1202 15:27:55.855781 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214af79c-cfe9-402c-952c-f266f24dc165" containerName="dnsmasq-dns" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.855787 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="214af79c-cfe9-402c-952c-f266f24dc165" containerName="dnsmasq-dns" Dec 02 15:27:55 crc kubenswrapper[4900]: E1202 15:27:55.855811 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214af79c-cfe9-402c-952c-f266f24dc165" containerName="init" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.855817 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="214af79c-cfe9-402c-952c-f266f24dc165" containerName="init" Dec 02 15:27:55 crc kubenswrapper[4900]: E1202 15:27:55.855830 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0613f269-0d01-4034-a25a-0128fe099674" containerName="dnsmasq-dns" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.855838 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0613f269-0d01-4034-a25a-0128fe099674" containerName="dnsmasq-dns" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.856049 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="214af79c-cfe9-402c-952c-f266f24dc165" containerName="dnsmasq-dns" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.856074 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0613f269-0d01-4034-a25a-0128fe099674" containerName="dnsmasq-dns" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.856802 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.858969 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.858987 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.859576 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.860004 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.896807 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h"] Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.952420 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.952478 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.952879 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r54vq\" (UniqueName: \"kubernetes.io/projected/a6628b96-8eee-40d6-9219-3db27878b324-kube-api-access-r54vq\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.953023 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:55 crc kubenswrapper[4900]: I1202 15:27:55.953118 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc kubenswrapper[4900]: I1202 15:27:56.054715 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc kubenswrapper[4900]: I1202 15:27:56.054847 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc kubenswrapper[4900]: I1202 15:27:56.054893 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc kubenswrapper[4900]: I1202 15:27:56.054927 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc kubenswrapper[4900]: I1202 15:27:56.055061 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r54vq\" (UniqueName: \"kubernetes.io/projected/a6628b96-8eee-40d6-9219-3db27878b324-kube-api-access-r54vq\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc kubenswrapper[4900]: I1202 15:27:56.060636 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc kubenswrapper[4900]: I1202 15:27:56.060819 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc kubenswrapper[4900]: I1202 15:27:56.061308 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc 
kubenswrapper[4900]: I1202 15:27:56.063151 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc kubenswrapper[4900]: I1202 15:27:56.073442 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r54vq\" (UniqueName: \"kubernetes.io/projected/a6628b96-8eee-40d6-9219-3db27878b324-kube-api-access-r54vq\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc kubenswrapper[4900]: I1202 15:27:56.188737 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:27:56 crc kubenswrapper[4900]: W1202 15:27:56.793617 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6628b96_8eee_40d6_9219_3db27878b324.slice/crio-eab70b18671029e5453a72d4ed1899eea38e37aee18c87bd37fd0cdc0cf91ddb WatchSource:0}: Error finding container eab70b18671029e5453a72d4ed1899eea38e37aee18c87bd37fd0cdc0cf91ddb: Status 404 returned error can't find the container with id eab70b18671029e5453a72d4ed1899eea38e37aee18c87bd37fd0cdc0cf91ddb Dec 02 15:27:56 crc kubenswrapper[4900]: I1202 15:27:56.799792 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h"] Dec 02 15:27:57 crc kubenswrapper[4900]: I1202 15:27:57.752037 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" event={"ID":"a6628b96-8eee-40d6-9219-3db27878b324","Type":"ContainerStarted","Data":"eab70b18671029e5453a72d4ed1899eea38e37aee18c87bd37fd0cdc0cf91ddb"} Dec 02 15:28:05 crc kubenswrapper[4900]: I1202 15:28:05.842186 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" event={"ID":"a6628b96-8eee-40d6-9219-3db27878b324","Type":"ContainerStarted","Data":"1783183e4315298fc1c726abfa170b4bac9c53d86cf57834125232c942636855"} Dec 02 15:28:05 crc kubenswrapper[4900]: I1202 15:28:05.876930 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" podStartSLOduration=2.273634581 podStartE2EDuration="10.87690606s" podCreationTimestamp="2025-12-02 15:27:55 +0000 UTC" firstStartedPulling="2025-12-02 15:27:56.796283883 +0000 UTC m=+6322.212097734" lastFinishedPulling="2025-12-02 15:28:05.399555362 +0000 UTC m=+6330.815369213" observedRunningTime="2025-12-02 15:28:05.865997053 +0000 UTC m=+6331.281810914" watchObservedRunningTime="2025-12-02 15:28:05.87690606 +0000 UTC m=+6331.292719931" Dec 02 15:28:11 crc kubenswrapper[4900]: I1202 15:28:11.672975 4900 scope.go:117] "RemoveContainer" containerID="ae880ad5056798052776a59981c137487fc399982a3a73a269c865600bbca156" Dec 02 15:28:11 crc kubenswrapper[4900]: I1202 15:28:11.718593 4900 scope.go:117] "RemoveContainer" 
containerID="a13e302e872bf70ff8b0de2dd5562eb90ae71d4b18cdf6b02368d4ecaa94595a" Dec 02 15:28:11 crc kubenswrapper[4900]: I1202 15:28:11.767885 4900 scope.go:117] "RemoveContainer" containerID="fe63642aece313d3acd628cdeb5c5ae7494a8c3bf7e9c2bfe853f18445e03af3" Dec 02 15:28:15 crc kubenswrapper[4900]: I1202 15:28:15.116771 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:28:15 crc kubenswrapper[4900]: I1202 15:28:15.117890 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.203981 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k6gg5"] Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.207416 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.237234 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6gg5"] Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.301007 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-catalog-content\") pod \"certified-operators-k6gg5\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.301072 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wqrj\" (UniqueName: \"kubernetes.io/projected/169b87e5-4976-4651-8026-c97c03a14402-kube-api-access-6wqrj\") pod \"certified-operators-k6gg5\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.301610 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-utilities\") pod \"certified-operators-k6gg5\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.403573 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-utilities\") pod \"certified-operators-k6gg5\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.403693 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-catalog-content\") pod \"certified-operators-k6gg5\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " 
pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.403751 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wqrj\" (UniqueName: \"kubernetes.io/projected/169b87e5-4976-4651-8026-c97c03a14402-kube-api-access-6wqrj\") pod \"certified-operators-k6gg5\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.404294 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-utilities\") pod \"certified-operators-k6gg5\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.404358 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-catalog-content\") pod \"certified-operators-k6gg5\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.428824 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wqrj\" (UniqueName: \"kubernetes.io/projected/169b87e5-4976-4651-8026-c97c03a14402-kube-api-access-6wqrj\") pod \"certified-operators-k6gg5\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:18 crc kubenswrapper[4900]: I1202 15:28:18.530998 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:19 crc kubenswrapper[4900]: I1202 15:28:19.019479 4900 generic.go:334] "Generic (PLEG): container finished" podID="a6628b96-8eee-40d6-9219-3db27878b324" containerID="1783183e4315298fc1c726abfa170b4bac9c53d86cf57834125232c942636855" exitCode=0 Dec 02 15:28:19 crc kubenswrapper[4900]: I1202 15:28:19.019556 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" event={"ID":"a6628b96-8eee-40d6-9219-3db27878b324","Type":"ContainerDied","Data":"1783183e4315298fc1c726abfa170b4bac9c53d86cf57834125232c942636855"} Dec 02 15:28:19 crc kubenswrapper[4900]: W1202 15:28:19.112237 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod169b87e5_4976_4651_8026_c97c03a14402.slice/crio-7164f33d3c7185f26ee078c36458a6cae84dad078b701278c36cef570ac6b4be WatchSource:0}: Error finding container 7164f33d3c7185f26ee078c36458a6cae84dad078b701278c36cef570ac6b4be: Status 404 returned error can't find the container with id 7164f33d3c7185f26ee078c36458a6cae84dad078b701278c36cef570ac6b4be Dec 02 15:28:19 crc kubenswrapper[4900]: I1202 15:28:19.130771 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6gg5"] Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.032626 4900 generic.go:334] "Generic (PLEG): container finished" podID="169b87e5-4976-4651-8026-c97c03a14402" containerID="29b6e2ef0316826966f29739acfe6b2b0aa85a6a066e47be9e467ffb611a8004" exitCode=0 Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.032720 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-k6gg5" event={"ID":"169b87e5-4976-4651-8026-c97c03a14402","Type":"ContainerDied","Data":"29b6e2ef0316826966f29739acfe6b2b0aa85a6a066e47be9e467ffb611a8004"} Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.033190 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6gg5" event={"ID":"169b87e5-4976-4651-8026-c97c03a14402","Type":"ContainerStarted","Data":"7164f33d3c7185f26ee078c36458a6cae84dad078b701278c36cef570ac6b4be"} Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.035870 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.575177 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.620439 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lhjbj"] Dec 02 15:28:20 crc kubenswrapper[4900]: E1202 15:28:20.621467 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6628b96-8eee-40d6-9219-3db27878b324" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.621510 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6628b96-8eee-40d6-9219-3db27878b324" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.621916 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6628b96-8eee-40d6-9219-3db27878b324" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.625407 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.642587 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhjbj"] Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.652407 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ssh-key\") pod \"a6628b96-8eee-40d6-9219-3db27878b324\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.652536 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-inventory\") pod \"a6628b96-8eee-40d6-9219-3db27878b324\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.652666 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r54vq\" (UniqueName: \"kubernetes.io/projected/a6628b96-8eee-40d6-9219-3db27878b324-kube-api-access-r54vq\") pod \"a6628b96-8eee-40d6-9219-3db27878b324\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.652783 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-pre-adoption-validation-combined-ca-bundle\") pod \"a6628b96-8eee-40d6-9219-3db27878b324\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.652929 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ceph\") pod \"a6628b96-8eee-40d6-9219-3db27878b324\" (UID: \"a6628b96-8eee-40d6-9219-3db27878b324\") " Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.660893 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ceph" (OuterVolumeSpecName: "ceph") pod "a6628b96-8eee-40d6-9219-3db27878b324" (UID: "a6628b96-8eee-40d6-9219-3db27878b324"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.661234 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "a6628b96-8eee-40d6-9219-3db27878b324" (UID: "a6628b96-8eee-40d6-9219-3db27878b324"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.669108 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6628b96-8eee-40d6-9219-3db27878b324-kube-api-access-r54vq" (OuterVolumeSpecName: "kube-api-access-r54vq") pod "a6628b96-8eee-40d6-9219-3db27878b324" (UID: "a6628b96-8eee-40d6-9219-3db27878b324"). InnerVolumeSpecName "kube-api-access-r54vq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.698443 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-inventory" (OuterVolumeSpecName: "inventory") pod "a6628b96-8eee-40d6-9219-3db27878b324" (UID: "a6628b96-8eee-40d6-9219-3db27878b324"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.699978 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a6628b96-8eee-40d6-9219-3db27878b324" (UID: "a6628b96-8eee-40d6-9219-3db27878b324"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.755053 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-catalog-content\") pod \"community-operators-lhjbj\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.755227 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-utilities\") pod \"community-operators-lhjbj\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.755275 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqwc5\" (UniqueName: \"kubernetes.io/projected/db1d7fe9-049b-4a54-bae2-795b9a066b16-kube-api-access-cqwc5\") pod \"community-operators-lhjbj\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.755339 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.755350 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.755358 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.755367 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r54vq\" (UniqueName: \"kubernetes.io/projected/a6628b96-8eee-40d6-9219-3db27878b324-kube-api-access-r54vq\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.755376 4900 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6628b96-8eee-40d6-9219-3db27878b324-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 
15:28:20.857167 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-utilities\") pod \"community-operators-lhjbj\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.857237 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqwc5\" (UniqueName: \"kubernetes.io/projected/db1d7fe9-049b-4a54-bae2-795b9a066b16-kube-api-access-cqwc5\") pod \"community-operators-lhjbj\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.857293 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-catalog-content\") pod \"community-operators-lhjbj\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.857725 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-utilities\") pod \"community-operators-lhjbj\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.857784 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-catalog-content\") pod \"community-operators-lhjbj\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.877213 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqwc5\" (UniqueName: \"kubernetes.io/projected/db1d7fe9-049b-4a54-bae2-795b9a066b16-kube-api-access-cqwc5\") pod \"community-operators-lhjbj\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:20 crc kubenswrapper[4900]: I1202 15:28:20.956822 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:21 crc kubenswrapper[4900]: I1202 15:28:21.050178 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" event={"ID":"a6628b96-8eee-40d6-9219-3db27878b324","Type":"ContainerDied","Data":"eab70b18671029e5453a72d4ed1899eea38e37aee18c87bd37fd0cdc0cf91ddb"} Dec 02 15:28:21 crc kubenswrapper[4900]: I1202 15:28:21.050211 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab70b18671029e5453a72d4ed1899eea38e37aee18c87bd37fd0cdc0cf91ddb" Dec 02 15:28:21 crc kubenswrapper[4900]: I1202 15:28:21.050228 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h" Dec 02 15:28:21 crc kubenswrapper[4900]: I1202 15:28:21.509460 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhjbj"] Dec 02 15:28:22 crc kubenswrapper[4900]: I1202 15:28:22.065584 4900 generic.go:334] "Generic (PLEG): container finished" podID="db1d7fe9-049b-4a54-bae2-795b9a066b16" containerID="10f8f5f517b419ed5fc1f772a3bad338d378ea894970e2c9ad8775bdc67b86e2" exitCode=0 Dec 02 15:28:22 crc kubenswrapper[4900]: I1202 15:28:22.065684 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjbj" event={"ID":"db1d7fe9-049b-4a54-bae2-795b9a066b16","Type":"ContainerDied","Data":"10f8f5f517b419ed5fc1f772a3bad338d378ea894970e2c9ad8775bdc67b86e2"} Dec 02 15:28:22 crc kubenswrapper[4900]: I1202 15:28:22.065712 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjbj" event={"ID":"db1d7fe9-049b-4a54-bae2-795b9a066b16","Type":"ContainerStarted","Data":"70409f6c4aea7c1acc5e52984934f5d5b725b97337a830750f7ebd0e7e6e58eb"} Dec 02 15:28:22 crc kubenswrapper[4900]: I1202 15:28:22.067580 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6gg5" event={"ID":"169b87e5-4976-4651-8026-c97c03a14402","Type":"ContainerStarted","Data":"dabb1cbcb606142a033c0af31035f0a8ac18dc75361467a2758138d4667781aa"} Dec 02 15:28:23 crc kubenswrapper[4900]: I1202 15:28:23.082105 4900 generic.go:334] "Generic (PLEG): container finished" podID="169b87e5-4976-4651-8026-c97c03a14402" containerID="dabb1cbcb606142a033c0af31035f0a8ac18dc75361467a2758138d4667781aa" exitCode=0 Dec 02 15:28:23 crc kubenswrapper[4900]: I1202 15:28:23.082192 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6gg5" event={"ID":"169b87e5-4976-4651-8026-c97c03a14402","Type":"ContainerDied","Data":"dabb1cbcb606142a033c0af31035f0a8ac18dc75361467a2758138d4667781aa"} Dec 02 15:28:24 crc kubenswrapper[4900]: I1202 15:28:24.094807 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjbj" event={"ID":"db1d7fe9-049b-4a54-bae2-795b9a066b16","Type":"ContainerStarted","Data":"4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4"} Dec 02 15:28:25 crc kubenswrapper[4900]: I1202 15:28:25.110437 4900 generic.go:334] "Generic (PLEG): container finished" podID="db1d7fe9-049b-4a54-bae2-795b9a066b16" containerID="4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4" exitCode=0 Dec 02 15:28:25 crc kubenswrapper[4900]: I1202 15:28:25.111115 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjbj" event={"ID":"db1d7fe9-049b-4a54-bae2-795b9a066b16","Type":"ContainerDied","Data":"4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4"} Dec 02 15:28:25 crc kubenswrapper[4900]: I1202 15:28:25.116167 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6gg5" event={"ID":"169b87e5-4976-4651-8026-c97c03a14402","Type":"ContainerStarted","Data":"089d170a8eb5070bf186b8f8abba304e5daf8112f6401494adfa26f385f1461c"} Dec 02 15:28:25 crc kubenswrapper[4900]: I1202 15:28:25.168046 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k6gg5" podStartSLOduration=3.586502298 
podStartE2EDuration="7.16801347s" podCreationTimestamp="2025-12-02 15:28:18 +0000 UTC" firstStartedPulling="2025-12-02 15:28:20.035304211 +0000 UTC m=+6345.451118102" lastFinishedPulling="2025-12-02 15:28:23.616815423 +0000 UTC m=+6349.032629274" observedRunningTime="2025-12-02 15:28:25.160507589 +0000 UTC m=+6350.576321490" watchObservedRunningTime="2025-12-02 15:28:25.16801347 +0000 UTC m=+6350.583827351" Dec 02 15:28:26 crc kubenswrapper[4900]: I1202 15:28:26.130280 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjbj" event={"ID":"db1d7fe9-049b-4a54-bae2-795b9a066b16","Type":"ContainerStarted","Data":"6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f"} Dec 02 15:28:26 crc kubenswrapper[4900]: I1202 15:28:26.158307 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lhjbj" podStartSLOduration=2.45815717 podStartE2EDuration="6.158282528s" podCreationTimestamp="2025-12-02 15:28:20 +0000 UTC" firstStartedPulling="2025-12-02 15:28:22.067216972 +0000 UTC m=+6347.483030823" lastFinishedPulling="2025-12-02 15:28:25.76734233 +0000 UTC m=+6351.183156181" observedRunningTime="2025-12-02 15:28:26.145820557 +0000 UTC m=+6351.561634428" watchObservedRunningTime="2025-12-02 15:28:26.158282528 +0000 UTC m=+6351.574096389" Dec 02 15:28:28 crc kubenswrapper[4900]: I1202 15:28:28.532356 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:28 crc kubenswrapper[4900]: I1202 15:28:28.533032 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:28 crc kubenswrapper[4900]: I1202 15:28:28.582225 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.039886 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg"] Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.041904 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.047708 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.048251 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.048620 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.048985 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.105264 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg"] Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.140209 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.140307 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6rl4\" (UniqueName: \"kubernetes.io/projected/8a8021fa-4038-4f47-ac57-f800a48e293a-kube-api-access-m6rl4\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.140369 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.140396 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.140445 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.217972 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.242755 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m6rl4\" (UniqueName: \"kubernetes.io/projected/8a8021fa-4038-4f47-ac57-f800a48e293a-kube-api-access-m6rl4\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.242833 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.242853 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.242892 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.243039 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.249243 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.249908 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.256013 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.262569 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m6rl4\" (UniqueName: \"kubernetes.io/projected/8a8021fa-4038-4f47-ac57-f800a48e293a-kube-api-access-m6rl4\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.267352 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.382289 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:28:29 crc kubenswrapper[4900]: I1202 15:28:29.961778 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg"] Dec 02 15:28:30 crc kubenswrapper[4900]: I1202 15:28:30.173984 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" event={"ID":"8a8021fa-4038-4f47-ac57-f800a48e293a","Type":"ContainerStarted","Data":"48a5d182ac79a4352e952c497b9af9b54b55bd45760f489db27a7d3fd4cdde29"} Dec 02 15:28:30 crc kubenswrapper[4900]: I1202 15:28:30.957941 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:30 crc kubenswrapper[4900]: I1202 15:28:30.958311 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:31 crc kubenswrapper[4900]: I1202 15:28:31.009854 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:31 crc kubenswrapper[4900]: I1202 15:28:31.191555 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" event={"ID":"8a8021fa-4038-4f47-ac57-f800a48e293a","Type":"ContainerStarted","Data":"f02c4dcac42079777dee927eb0b93d9d5f9fb792299a6062f909cf9b9cf4f681"} Dec 02 15:28:31 crc kubenswrapper[4900]: I1202 15:28:31.266585 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:31 crc kubenswrapper[4900]: I1202 15:28:31.293694 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" podStartSLOduration=1.846255786 podStartE2EDuration="2.293671401s" podCreationTimestamp="2025-12-02 15:28:29 +0000 UTC" firstStartedPulling="2025-12-02 15:28:29.971155658 +0000 UTC m=+6355.386969509" lastFinishedPulling="2025-12-02 15:28:30.418571273 +0000 UTC m=+6355.834385124" observedRunningTime="2025-12-02 15:28:31.221145251 +0000 UTC m=+6356.636959102" watchObservedRunningTime="2025-12-02 15:28:31.293671401 +0000 UTC m=+6356.709485262" Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.008970 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6gg5"] Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.009209 4900 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-k6gg5" podUID="169b87e5-4976-4651-8026-c97c03a14402" containerName="registry-server" containerID="cri-o://089d170a8eb5070bf186b8f8abba304e5daf8112f6401494adfa26f385f1461c" gracePeriod=2 Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.205857 4900 generic.go:334] "Generic (PLEG): container finished" podID="169b87e5-4976-4651-8026-c97c03a14402" containerID="089d170a8eb5070bf186b8f8abba304e5daf8112f6401494adfa26f385f1461c" exitCode=0 Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.207016 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6gg5" event={"ID":"169b87e5-4976-4651-8026-c97c03a14402","Type":"ContainerDied","Data":"089d170a8eb5070bf186b8f8abba304e5daf8112f6401494adfa26f385f1461c"} Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.623571 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.749239 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-utilities\") pod \"169b87e5-4976-4651-8026-c97c03a14402\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.749359 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-catalog-content\") pod \"169b87e5-4976-4651-8026-c97c03a14402\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.749573 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wqrj\" (UniqueName: \"kubernetes.io/projected/169b87e5-4976-4651-8026-c97c03a14402-kube-api-access-6wqrj\") pod \"169b87e5-4976-4651-8026-c97c03a14402\" (UID: \"169b87e5-4976-4651-8026-c97c03a14402\") " Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.749864 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-utilities" (OuterVolumeSpecName: "utilities") pod "169b87e5-4976-4651-8026-c97c03a14402" (UID: "169b87e5-4976-4651-8026-c97c03a14402"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.750210 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.760027 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169b87e5-4976-4651-8026-c97c03a14402-kube-api-access-6wqrj" (OuterVolumeSpecName: "kube-api-access-6wqrj") pod "169b87e5-4976-4651-8026-c97c03a14402" (UID: "169b87e5-4976-4651-8026-c97c03a14402"). InnerVolumeSpecName "kube-api-access-6wqrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.802791 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "169b87e5-4976-4651-8026-c97c03a14402" (UID: "169b87e5-4976-4651-8026-c97c03a14402"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.851925 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b87e5-4976-4651-8026-c97c03a14402-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:32 crc kubenswrapper[4900]: I1202 15:28:32.851967 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wqrj\" (UniqueName: \"kubernetes.io/projected/169b87e5-4976-4651-8026-c97c03a14402-kube-api-access-6wqrj\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:33 crc kubenswrapper[4900]: I1202 15:28:33.221755 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6gg5" event={"ID":"169b87e5-4976-4651-8026-c97c03a14402","Type":"ContainerDied","Data":"7164f33d3c7185f26ee078c36458a6cae84dad078b701278c36cef570ac6b4be"} Dec 02 15:28:33 crc kubenswrapper[4900]: I1202 15:28:33.221824 4900 scope.go:117] "RemoveContainer" containerID="089d170a8eb5070bf186b8f8abba304e5daf8112f6401494adfa26f385f1461c" Dec 02 15:28:33 crc kubenswrapper[4900]: I1202 15:28:33.221921 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6gg5" Dec 02 15:28:33 crc kubenswrapper[4900]: I1202 15:28:33.287845 4900 scope.go:117] "RemoveContainer" containerID="dabb1cbcb606142a033c0af31035f0a8ac18dc75361467a2758138d4667781aa" Dec 02 15:28:33 crc kubenswrapper[4900]: I1202 15:28:33.289754 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6gg5"] Dec 02 15:28:33 crc kubenswrapper[4900]: I1202 15:28:33.332549 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k6gg5"] Dec 02 15:28:33 crc kubenswrapper[4900]: I1202 15:28:33.389289 4900 scope.go:117] "RemoveContainer" containerID="29b6e2ef0316826966f29739acfe6b2b0aa85a6a066e47be9e467ffb611a8004" Dec 02 15:28:34 crc kubenswrapper[4900]: I1202 15:28:34.595815 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lhjbj"] Dec 02 15:28:34 crc kubenswrapper[4900]: I1202 15:28:34.596665 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lhjbj" podUID="db1d7fe9-049b-4a54-bae2-795b9a066b16" containerName="registry-server" containerID="cri-o://6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f" gracePeriod=2 Dec 02 15:28:34 crc kubenswrapper[4900]: I1202 15:28:34.945028 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169b87e5-4976-4651-8026-c97c03a14402" path="/var/lib/kubelet/pods/169b87e5-4976-4651-8026-c97c03a14402/volumes" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.148626 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.207318 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-utilities\") pod \"db1d7fe9-049b-4a54-bae2-795b9a066b16\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.207688 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-catalog-content\") pod \"db1d7fe9-049b-4a54-bae2-795b9a066b16\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.207802 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqwc5\" (UniqueName: \"kubernetes.io/projected/db1d7fe9-049b-4a54-bae2-795b9a066b16-kube-api-access-cqwc5\") pod \"db1d7fe9-049b-4a54-bae2-795b9a066b16\" (UID: \"db1d7fe9-049b-4a54-bae2-795b9a066b16\") " Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.208624 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-utilities" (OuterVolumeSpecName: "utilities") pod "db1d7fe9-049b-4a54-bae2-795b9a066b16" (UID: "db1d7fe9-049b-4a54-bae2-795b9a066b16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.231593 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1d7fe9-049b-4a54-bae2-795b9a066b16-kube-api-access-cqwc5" (OuterVolumeSpecName: "kube-api-access-cqwc5") pod "db1d7fe9-049b-4a54-bae2-795b9a066b16" (UID: "db1d7fe9-049b-4a54-bae2-795b9a066b16"). InnerVolumeSpecName "kube-api-access-cqwc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.273922 4900 generic.go:334] "Generic (PLEG): container finished" podID="db1d7fe9-049b-4a54-bae2-795b9a066b16" containerID="6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f" exitCode=0 Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.273971 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjbj" event={"ID":"db1d7fe9-049b-4a54-bae2-795b9a066b16","Type":"ContainerDied","Data":"6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f"} Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.274005 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhjbj" event={"ID":"db1d7fe9-049b-4a54-bae2-795b9a066b16","Type":"ContainerDied","Data":"70409f6c4aea7c1acc5e52984934f5d5b725b97337a830750f7ebd0e7e6e58eb"} Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.274027 4900 scope.go:117] "RemoveContainer" containerID="6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.274199 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lhjbj" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.279387 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db1d7fe9-049b-4a54-bae2-795b9a066b16" (UID: "db1d7fe9-049b-4a54-bae2-795b9a066b16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.307501 4900 scope.go:117] "RemoveContainer" containerID="4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.310095 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqwc5\" (UniqueName: \"kubernetes.io/projected/db1d7fe9-049b-4a54-bae2-795b9a066b16-kube-api-access-cqwc5\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.310130 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.310143 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1d7fe9-049b-4a54-bae2-795b9a066b16-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.336173 4900 scope.go:117] "RemoveContainer" containerID="10f8f5f517b419ed5fc1f772a3bad338d378ea894970e2c9ad8775bdc67b86e2" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.386712 4900 scope.go:117] "RemoveContainer" containerID="6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f" Dec 02 15:28:35 crc kubenswrapper[4900]: E1202 15:28:35.387234 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f\": container with ID starting with 6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f not found: ID does not exist" containerID="6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.387298 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f"} err="failed to get container status \"6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f\": rpc error: code = NotFound desc = could not find container \"6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f\": container with ID starting with 6fe83d1a2d9b3604133e4d94bbcc387f1519b724a72e353876da2a5dbc814a3f not found: ID does not exist" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.387332 4900 scope.go:117] "RemoveContainer" containerID="4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4" Dec 02 15:28:35 crc kubenswrapper[4900]: E1202 15:28:35.387681 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4\": container with ID starting with 4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4 not found: ID does not exist" 
containerID="4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.387713 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4"} err="failed to get container status \"4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4\": rpc error: code = NotFound desc = could not find container \"4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4\": container with ID starting with 4cc37f662c48412762447654f7551051f8141687eee88e4abb9ef4b67adb4bb4 not found: ID does not exist" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.387735 4900 scope.go:117] "RemoveContainer" containerID="10f8f5f517b419ed5fc1f772a3bad338d378ea894970e2c9ad8775bdc67b86e2" Dec 02 15:28:35 crc kubenswrapper[4900]: E1202 15:28:35.388069 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f8f5f517b419ed5fc1f772a3bad338d378ea894970e2c9ad8775bdc67b86e2\": container with ID starting with 10f8f5f517b419ed5fc1f772a3bad338d378ea894970e2c9ad8775bdc67b86e2 not found: ID does not exist" containerID="10f8f5f517b419ed5fc1f772a3bad338d378ea894970e2c9ad8775bdc67b86e2" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.388111 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f8f5f517b419ed5fc1f772a3bad338d378ea894970e2c9ad8775bdc67b86e2"} err="failed to get container status \"10f8f5f517b419ed5fc1f772a3bad338d378ea894970e2c9ad8775bdc67b86e2\": rpc error: code = NotFound desc = could not find container \"10f8f5f517b419ed5fc1f772a3bad338d378ea894970e2c9ad8775bdc67b86e2\": container with ID starting with 10f8f5f517b419ed5fc1f772a3bad338d378ea894970e2c9ad8775bdc67b86e2 not found: ID does not exist" Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.613533 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lhjbj"] Dec 02 15:28:35 crc kubenswrapper[4900]: I1202 15:28:35.625138 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lhjbj"] Dec 02 15:28:36 crc kubenswrapper[4900]: I1202 15:28:36.932031 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db1d7fe9-049b-4a54-bae2-795b9a066b16" path="/var/lib/kubelet/pods/db1d7fe9-049b-4a54-bae2-795b9a066b16/volumes" Dec 02 15:28:45 crc kubenswrapper[4900]: I1202 15:28:45.116708 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:28:45 crc kubenswrapper[4900]: I1202 15:28:45.117355 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:29:15 crc kubenswrapper[4900]: I1202 15:29:15.116515 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Dec 02 15:29:15 crc kubenswrapper[4900]: I1202 15:29:15.117178 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 15:29:15 crc kubenswrapper[4900]: I1202 15:29:15.117235 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq"
Dec 02 15:29:15 crc kubenswrapper[4900]: I1202 15:29:15.118293 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 15:29:15 crc kubenswrapper[4900]: I1202 15:29:15.118357 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" gracePeriod=600
Dec 02 15:29:15 crc kubenswrapper[4900]: E1202 15:29:15.248950 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:29:15 crc kubenswrapper[4900]: I1202 15:29:15.750075 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" exitCode=0
Dec 02 15:29:15 crc kubenswrapper[4900]: I1202 15:29:15.750141 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3"}
Dec 02 15:29:15 crc kubenswrapper[4900]: I1202 15:29:15.750191 4900 scope.go:117] "RemoveContainer" containerID="5b89bde9e30bda55f0fc8913241034f85509c78b8e0ea65f7e6e475647a12267"
Dec 02 15:29:15 crc kubenswrapper[4900]: I1202 15:29:15.751297 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3"
Dec 02 15:29:15 crc kubenswrapper[4900]: E1202 15:29:15.751811 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:29:30 crc kubenswrapper[4900]: I1202 15:29:30.910090 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3"
Dec 02 15:29:30 crc kubenswrapper[4900]: E1202 15:29:30.911056 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
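Note: from here the log settles into the CrashLoopBackOff pattern for machine-config-daemon: a "RemoveContainer" attempt immediately followed by the back-off error, repeated at growing intervals until the delay reaches the 5m0s cap quoted in the message. The kubelet's container restart back-off is commonly described as starting at 10s and doubling up to that 5-minute cap; treating the 10s/doubling part as an assumption (only the cap is visible in these entries), the schedule is easy to reproduce in Python:

def crashloop_delays(initial=10.0, factor=2.0, cap=300.0):
    """Assumed kubelet restart back-off: 10s, doubling, capped at the
    5m ("back-off 5m0s") quoted in the errors above."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

gen = crashloop_delays()
for attempt in range(1, 8):
    print(f"restart attempt {attempt}: wait {next(gen):.0f}s")
# 10, 20, 40, 80, 160, 300, 300, ... seconds between restarts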
containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:29:30 crc kubenswrapper[4900]: E1202 15:29:30.911056 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:29:44 crc kubenswrapper[4900]: I1202 15:29:44.921217 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:29:44 crc kubenswrapper[4900]: E1202 15:29:44.923222 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:29:55 crc kubenswrapper[4900]: I1202 15:29:55.911734 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:29:55 crc kubenswrapper[4900]: E1202 15:29:55.912604 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.187791 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl"] Dec 02 15:30:00 crc kubenswrapper[4900]: E1202 15:30:00.188799 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1d7fe9-049b-4a54-bae2-795b9a066b16" containerName="extract-utilities" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.188813 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1d7fe9-049b-4a54-bae2-795b9a066b16" containerName="extract-utilities" Dec 02 15:30:00 crc kubenswrapper[4900]: E1202 15:30:00.188836 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169b87e5-4976-4651-8026-c97c03a14402" containerName="extract-utilities" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.188842 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="169b87e5-4976-4651-8026-c97c03a14402" containerName="extract-utilities" Dec 02 15:30:00 crc kubenswrapper[4900]: E1202 15:30:00.188856 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169b87e5-4976-4651-8026-c97c03a14402" containerName="registry-server" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.188863 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="169b87e5-4976-4651-8026-c97c03a14402" containerName="registry-server" Dec 02 15:30:00 crc kubenswrapper[4900]: E1202 15:30:00.188883 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1d7fe9-049b-4a54-bae2-795b9a066b16" containerName="extract-content" Dec 02 15:30:00 crc kubenswrapper[4900]: 
I1202 15:30:00.188889 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1d7fe9-049b-4a54-bae2-795b9a066b16" containerName="extract-content" Dec 02 15:30:00 crc kubenswrapper[4900]: E1202 15:30:00.188911 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1d7fe9-049b-4a54-bae2-795b9a066b16" containerName="registry-server" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.188917 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1d7fe9-049b-4a54-bae2-795b9a066b16" containerName="registry-server" Dec 02 15:30:00 crc kubenswrapper[4900]: E1202 15:30:00.188927 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169b87e5-4976-4651-8026-c97c03a14402" containerName="extract-content" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.188933 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="169b87e5-4976-4651-8026-c97c03a14402" containerName="extract-content" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.189129 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1d7fe9-049b-4a54-bae2-795b9a066b16" containerName="registry-server" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.189138 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="169b87e5-4976-4651-8026-c97c03a14402" containerName="registry-server" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.189986 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.195161 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.195716 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.206993 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl"] Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.274019 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-secret-volume\") pod \"collect-profiles-29411490-fb6zl\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.274100 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lb9q\" (UniqueName: \"kubernetes.io/projected/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-kube-api-access-8lb9q\") pod \"collect-profiles-29411490-fb6zl\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.274131 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-config-volume\") pod \"collect-profiles-29411490-fb6zl\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.375910 
4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-secret-volume\") pod \"collect-profiles-29411490-fb6zl\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.376002 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lb9q\" (UniqueName: \"kubernetes.io/projected/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-kube-api-access-8lb9q\") pod \"collect-profiles-29411490-fb6zl\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.376034 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-config-volume\") pod \"collect-profiles-29411490-fb6zl\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.376977 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-config-volume\") pod \"collect-profiles-29411490-fb6zl\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.392314 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-secret-volume\") pod \"collect-profiles-29411490-fb6zl\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.398838 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lb9q\" (UniqueName: \"kubernetes.io/projected/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-kube-api-access-8lb9q\") pod \"collect-profiles-29411490-fb6zl\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.525597 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:00 crc kubenswrapper[4900]: I1202 15:30:00.981889 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl"] Dec 02 15:30:01 crc kubenswrapper[4900]: I1202 15:30:01.302914 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" event={"ID":"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826","Type":"ContainerStarted","Data":"7a1abc76993582bd8edc1075f75236b6cc494e3372fa5f49e60a8aca698d7d38"} Dec 02 15:30:01 crc kubenswrapper[4900]: I1202 15:30:01.303156 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" event={"ID":"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826","Type":"ContainerStarted","Data":"2b4ce8f1deb2ce1d7ebbc1ce410c5ae7f8a3eda9ffd776cfb64138be121de94f"} Dec 02 15:30:01 crc kubenswrapper[4900]: I1202 15:30:01.324017 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" podStartSLOduration=1.323996761 podStartE2EDuration="1.323996761s" podCreationTimestamp="2025-12-02 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:30:01.31648826 +0000 UTC m=+6446.732302121" watchObservedRunningTime="2025-12-02 15:30:01.323996761 +0000 UTC m=+6446.739810612" Dec 02 15:30:02 crc kubenswrapper[4900]: I1202 15:30:02.315055 4900 generic.go:334] "Generic (PLEG): container finished" podID="d8ab412e-e6b6-4de9-a4c0-e4d23fc46826" containerID="7a1abc76993582bd8edc1075f75236b6cc494e3372fa5f49e60a8aca698d7d38" exitCode=0 Dec 02 15:30:02 crc kubenswrapper[4900]: I1202 15:30:02.315106 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" event={"ID":"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826","Type":"ContainerDied","Data":"7a1abc76993582bd8edc1075f75236b6cc494e3372fa5f49e60a8aca698d7d38"} Dec 02 15:30:03 crc kubenswrapper[4900]: I1202 15:30:03.779962 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:03 crc kubenswrapper[4900]: I1202 15:30:03.851072 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-secret-volume\") pod \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " Dec 02 15:30:03 crc kubenswrapper[4900]: I1202 15:30:03.851188 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lb9q\" (UniqueName: \"kubernetes.io/projected/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-kube-api-access-8lb9q\") pod \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " Dec 02 15:30:03 crc kubenswrapper[4900]: I1202 15:30:03.851253 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-config-volume\") pod \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\" (UID: \"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826\") " Dec 02 15:30:03 crc kubenswrapper[4900]: I1202 15:30:03.853067 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-config-volume" (OuterVolumeSpecName: "config-volume") pod "d8ab412e-e6b6-4de9-a4c0-e4d23fc46826" (UID: "d8ab412e-e6b6-4de9-a4c0-e4d23fc46826"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:30:03 crc kubenswrapper[4900]: I1202 15:30:03.856791 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d8ab412e-e6b6-4de9-a4c0-e4d23fc46826" (UID: "d8ab412e-e6b6-4de9-a4c0-e4d23fc46826"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:30:03 crc kubenswrapper[4900]: I1202 15:30:03.857104 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-kube-api-access-8lb9q" (OuterVolumeSpecName: "kube-api-access-8lb9q") pod "d8ab412e-e6b6-4de9-a4c0-e4d23fc46826" (UID: "d8ab412e-e6b6-4de9-a4c0-e4d23fc46826"). InnerVolumeSpecName "kube-api-access-8lb9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:30:03 crc kubenswrapper[4900]: I1202 15:30:03.954582 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:30:03 crc kubenswrapper[4900]: I1202 15:30:03.954662 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:30:03 crc kubenswrapper[4900]: I1202 15:30:03.954685 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lb9q\" (UniqueName: \"kubernetes.io/projected/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826-kube-api-access-8lb9q\") on node \"crc\" DevicePath \"\"" Dec 02 15:30:04 crc kubenswrapper[4900]: I1202 15:30:04.342466 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" event={"ID":"d8ab412e-e6b6-4de9-a4c0-e4d23fc46826","Type":"ContainerDied","Data":"2b4ce8f1deb2ce1d7ebbc1ce410c5ae7f8a3eda9ffd776cfb64138be121de94f"} Dec 02 15:30:04 crc kubenswrapper[4900]: I1202 15:30:04.342916 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b4ce8f1deb2ce1d7ebbc1ce410c5ae7f8a3eda9ffd776cfb64138be121de94f" Dec 02 15:30:04 crc kubenswrapper[4900]: I1202 15:30:04.342550 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl" Dec 02 15:30:04 crc kubenswrapper[4900]: I1202 15:30:04.920988 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln"] Dec 02 15:30:04 crc kubenswrapper[4900]: I1202 15:30:04.922849 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-rttln"] Dec 02 15:30:06 crc kubenswrapper[4900]: I1202 15:30:06.924518 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed" path="/var/lib/kubelet/pods/ed982b4f-5c57-4e5d-bd1f-ce7887b5bbed/volumes" Dec 02 15:30:10 crc kubenswrapper[4900]: I1202 15:30:10.910659 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:30:10 crc kubenswrapper[4900]: E1202 15:30:10.911385 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:30:11 crc kubenswrapper[4900]: I1202 15:30:11.048558 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-47ngw"] Dec 02 15:30:11 crc kubenswrapper[4900]: I1202 15:30:11.075529 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-47ngw"] Dec 02 15:30:12 crc kubenswrapper[4900]: I1202 15:30:12.040470 4900 scope.go:117] "RemoveContainer" containerID="6b22e05c3fa923542344a5c19b5ff3cb1ce7d10f6d210af5ade8a393d93415e4" Dec 02 15:30:12 crc kubenswrapper[4900]: I1202 15:30:12.042606 4900 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/octavia-c45b-account-create-update-jp8vb"] Dec 02 15:30:12 crc kubenswrapper[4900]: I1202 15:30:12.058167 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-c45b-account-create-update-jp8vb"] Dec 02 15:30:12 crc kubenswrapper[4900]: I1202 15:30:12.932391 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6576ef94-c3a0-4392-91d2-935f84cda6c5" path="/var/lib/kubelet/pods/6576ef94-c3a0-4392-91d2-935f84cda6c5/volumes" Dec 02 15:30:12 crc kubenswrapper[4900]: I1202 15:30:12.935416 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2f51f9e-0c86-4624-9080-7638e95cf27e" path="/var/lib/kubelet/pods/f2f51f9e-0c86-4624-9080-7638e95cf27e/volumes" Dec 02 15:30:17 crc kubenswrapper[4900]: I1202 15:30:17.033546 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-8992z"] Dec 02 15:30:17 crc kubenswrapper[4900]: I1202 15:30:17.039757 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-8992z"] Dec 02 15:30:18 crc kubenswrapper[4900]: I1202 15:30:18.062934 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-27a3-account-create-update-fw4gf"] Dec 02 15:30:18 crc kubenswrapper[4900]: I1202 15:30:18.080149 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-27a3-account-create-update-fw4gf"] Dec 02 15:30:18 crc kubenswrapper[4900]: I1202 15:30:18.925976 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="157d9be1-babc-42f1-81f4-affacc965d19" path="/var/lib/kubelet/pods/157d9be1-babc-42f1-81f4-affacc965d19/volumes" Dec 02 15:30:18 crc kubenswrapper[4900]: I1202 15:30:18.927241 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4953e279-5ba5-4fad-9b71-f4baa47a27c2" path="/var/lib/kubelet/pods/4953e279-5ba5-4fad-9b71-f4baa47a27c2/volumes" Dec 02 15:30:22 crc kubenswrapper[4900]: I1202 15:30:22.910133 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:30:22 crc kubenswrapper[4900]: E1202 15:30:22.911106 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:30:36 crc kubenswrapper[4900]: I1202 15:30:36.910837 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:30:36 crc kubenswrapper[4900]: E1202 15:30:36.912221 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:30:49 crc kubenswrapper[4900]: I1202 15:30:49.909857 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:30:49 crc kubenswrapper[4900]: E1202 15:30:49.910857 4900 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:31:03 crc kubenswrapper[4900]: I1202 15:31:03.065103 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-kgltv"] Dec 02 15:31:03 crc kubenswrapper[4900]: I1202 15:31:03.080308 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-kgltv"] Dec 02 15:31:04 crc kubenswrapper[4900]: I1202 15:31:04.924611 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:31:04 crc kubenswrapper[4900]: E1202 15:31:04.933784 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:31:04 crc kubenswrapper[4900]: I1202 15:31:04.948891 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0779c245-beae-4f64-a9d5-c4ad61d6c1e4" path="/var/lib/kubelet/pods/0779c245-beae-4f64-a9d5-c4ad61d6c1e4/volumes" Dec 02 15:31:12 crc kubenswrapper[4900]: I1202 15:31:12.124600 4900 scope.go:117] "RemoveContainer" containerID="6318ce7888e8ccaf65eb158f0af633ffd162116b06fdd08c1e60eda48723f3de" Dec 02 15:31:12 crc kubenswrapper[4900]: I1202 15:31:12.151932 4900 scope.go:117] "RemoveContainer" containerID="ed4cc4901ae1e401b8bb8e05ac1d4e79935d8a4bd4b6219f79bc280f657e5b9a" Dec 02 15:31:12 crc kubenswrapper[4900]: I1202 15:31:12.216973 4900 scope.go:117] "RemoveContainer" containerID="b8a1704d7fef8b833276889b3bd53dcb0b87b67c70e2f595cbf0b2a4c5ecd51a" Dec 02 15:31:12 crc kubenswrapper[4900]: I1202 15:31:12.259543 4900 scope.go:117] "RemoveContainer" containerID="28dac865f8ba085fdf607d13dc692464463829cb5723c325d73af3ce03d1bb0a" Dec 02 15:31:12 crc kubenswrapper[4900]: I1202 15:31:12.294281 4900 scope.go:117] "RemoveContainer" containerID="b523f6e3b7077875d013004d2dc4e50898da8cf9da2a06fe1e615a36826a5b3e" Dec 02 15:31:12 crc kubenswrapper[4900]: I1202 15:31:12.344887 4900 scope.go:117] "RemoveContainer" containerID="30557a7ab552f8030f0c67948c719f3a618bb09a138d1bb8f7000c7b5c9f93d1" Dec 02 15:31:18 crc kubenswrapper[4900]: I1202 15:31:18.910766 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:31:18 crc kubenswrapper[4900]: E1202 15:31:18.911537 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:31:32 crc kubenswrapper[4900]: I1202 15:31:32.911387 4900 scope.go:117] "RemoveContainer" 
containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:31:32 crc kubenswrapper[4900]: E1202 15:31:32.921911 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:31:44 crc kubenswrapper[4900]: I1202 15:31:44.919485 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:31:44 crc kubenswrapper[4900]: E1202 15:31:44.920397 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:31:59 crc kubenswrapper[4900]: I1202 15:31:59.910909 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:31:59 crc kubenswrapper[4900]: E1202 15:31:59.912014 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:32:12 crc kubenswrapper[4900]: I1202 15:32:12.910818 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:32:12 crc kubenswrapper[4900]: E1202 15:32:12.912106 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:32:23 crc kubenswrapper[4900]: I1202 15:32:23.910801 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:32:23 crc kubenswrapper[4900]: E1202 15:32:23.911846 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:32:38 crc kubenswrapper[4900]: I1202 15:32:38.917266 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:32:38 crc kubenswrapper[4900]: E1202 15:32:38.919367 4900 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:32:51 crc kubenswrapper[4900]: I1202 15:32:51.910813 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:32:51 crc kubenswrapper[4900]: E1202 15:32:51.911489 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:33:01 crc kubenswrapper[4900]: I1202 15:33:01.071515 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-84gn7"] Dec 02 15:33:01 crc kubenswrapper[4900]: I1202 15:33:01.089856 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-84gn7"] Dec 02 15:33:02 crc kubenswrapper[4900]: I1202 15:33:02.028799 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-c5e2-account-create-update-npvr6"] Dec 02 15:33:02 crc kubenswrapper[4900]: I1202 15:33:02.056347 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-c5e2-account-create-update-npvr6"] Dec 02 15:33:02 crc kubenswrapper[4900]: I1202 15:33:02.942475 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28103557-41db-491f-99c1-3f122972f6b9" path="/var/lib/kubelet/pods/28103557-41db-491f-99c1-3f122972f6b9/volumes" Dec 02 15:33:02 crc kubenswrapper[4900]: I1202 15:33:02.944469 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52132994-d9fc-4431-8497-a23f6c6dc7e5" path="/var/lib/kubelet/pods/52132994-d9fc-4431-8497-a23f6c6dc7e5/volumes" Dec 02 15:33:03 crc kubenswrapper[4900]: I1202 15:33:03.910922 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:33:03 crc kubenswrapper[4900]: E1202 15:33:03.911753 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:33:06 crc kubenswrapper[4900]: I1202 15:33:06.745909 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7vwhm"] Dec 02 15:33:06 crc kubenswrapper[4900]: E1202 15:33:06.746913 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ab412e-e6b6-4de9-a4c0-e4d23fc46826" containerName="collect-profiles" Dec 02 15:33:06 crc kubenswrapper[4900]: I1202 15:33:06.746925 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ab412e-e6b6-4de9-a4c0-e4d23fc46826" containerName="collect-profiles" Dec 02 15:33:06 crc kubenswrapper[4900]: I1202 15:33:06.747135 4900 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d8ab412e-e6b6-4de9-a4c0-e4d23fc46826" containerName="collect-profiles" Dec 02 15:33:06 crc kubenswrapper[4900]: I1202 15:33:06.748604 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7vwhm" Dec 02 15:33:06 crc kubenswrapper[4900]: I1202 15:33:06.777503 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vwhm"] Dec 02 15:33:06 crc kubenswrapper[4900]: I1202 15:33:06.923233 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-catalog-content\") pod \"redhat-marketplace-7vwhm\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") " pod="openshift-marketplace/redhat-marketplace-7vwhm" Dec 02 15:33:06 crc kubenswrapper[4900]: I1202 15:33:06.923297 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-utilities\") pod \"redhat-marketplace-7vwhm\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") " pod="openshift-marketplace/redhat-marketplace-7vwhm" Dec 02 15:33:06 crc kubenswrapper[4900]: I1202 15:33:06.923332 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k52r\" (UniqueName: \"kubernetes.io/projected/515acf0d-1e0c-4b44-a491-cf40a18e38f0-kube-api-access-2k52r\") pod \"redhat-marketplace-7vwhm\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") " pod="openshift-marketplace/redhat-marketplace-7vwhm" Dec 02 15:33:07 crc kubenswrapper[4900]: I1202 15:33:07.025472 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-utilities\") pod \"redhat-marketplace-7vwhm\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") " pod="openshift-marketplace/redhat-marketplace-7vwhm" Dec 02 15:33:07 crc kubenswrapper[4900]: I1202 15:33:07.025547 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k52r\" (UniqueName: \"kubernetes.io/projected/515acf0d-1e0c-4b44-a491-cf40a18e38f0-kube-api-access-2k52r\") pod \"redhat-marketplace-7vwhm\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") " pod="openshift-marketplace/redhat-marketplace-7vwhm" Dec 02 15:33:07 crc kubenswrapper[4900]: I1202 15:33:07.025828 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-catalog-content\") pod \"redhat-marketplace-7vwhm\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") " pod="openshift-marketplace/redhat-marketplace-7vwhm" Dec 02 15:33:07 crc kubenswrapper[4900]: I1202 15:33:07.026130 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-utilities\") pod \"redhat-marketplace-7vwhm\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") " pod="openshift-marketplace/redhat-marketplace-7vwhm" Dec 02 15:33:07 crc kubenswrapper[4900]: I1202 15:33:07.026229 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-catalog-content\") pod 
\"redhat-marketplace-7vwhm\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") " pod="openshift-marketplace/redhat-marketplace-7vwhm" Dec 02 15:33:07 crc kubenswrapper[4900]: I1202 15:33:07.065493 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k52r\" (UniqueName: \"kubernetes.io/projected/515acf0d-1e0c-4b44-a491-cf40a18e38f0-kube-api-access-2k52r\") pod \"redhat-marketplace-7vwhm\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") " pod="openshift-marketplace/redhat-marketplace-7vwhm" Dec 02 15:33:07 crc kubenswrapper[4900]: I1202 15:33:07.114764 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7vwhm" Dec 02 15:33:07 crc kubenswrapper[4900]: I1202 15:33:07.619159 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vwhm"] Dec 02 15:33:08 crc kubenswrapper[4900]: I1202 15:33:08.619384 4900 generic.go:334] "Generic (PLEG): container finished" podID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" containerID="b467fcf46b2980eada37e9f90df6ba66e577e3b00c2cb020a319e06f798186ab" exitCode=0 Dec 02 15:33:08 crc kubenswrapper[4900]: I1202 15:33:08.619470 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vwhm" event={"ID":"515acf0d-1e0c-4b44-a491-cf40a18e38f0","Type":"ContainerDied","Data":"b467fcf46b2980eada37e9f90df6ba66e577e3b00c2cb020a319e06f798186ab"} Dec 02 15:33:08 crc kubenswrapper[4900]: I1202 15:33:08.619832 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vwhm" event={"ID":"515acf0d-1e0c-4b44-a491-cf40a18e38f0","Type":"ContainerStarted","Data":"921bb430d5e494aeb1a8299990b23b57b8802ecccef8ba1707df60458bbea7a3"} Dec 02 15:33:10 crc kubenswrapper[4900]: I1202 15:33:10.645426 4900 generic.go:334] "Generic (PLEG): container finished" podID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" containerID="bd521fa7d619d53399ad2264f55e0ee72ec1c1e33780580eaaa4135766183976" exitCode=0 Dec 02 15:33:10 crc kubenswrapper[4900]: I1202 15:33:10.646001 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vwhm" event={"ID":"515acf0d-1e0c-4b44-a491-cf40a18e38f0","Type":"ContainerDied","Data":"bd521fa7d619d53399ad2264f55e0ee72ec1c1e33780580eaaa4135766183976"} Dec 02 15:33:11 crc kubenswrapper[4900]: I1202 15:33:11.661654 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vwhm" event={"ID":"515acf0d-1e0c-4b44-a491-cf40a18e38f0","Type":"ContainerStarted","Data":"00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9"} Dec 02 15:33:11 crc kubenswrapper[4900]: I1202 15:33:11.692348 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7vwhm" podStartSLOduration=3.1588277160000002 podStartE2EDuration="5.692330336s" podCreationTimestamp="2025-12-02 15:33:06 +0000 UTC" firstStartedPulling="2025-12-02 15:33:08.621263884 +0000 UTC m=+6634.037077725" lastFinishedPulling="2025-12-02 15:33:11.154766494 +0000 UTC m=+6636.570580345" observedRunningTime="2025-12-02 15:33:11.685196896 +0000 UTC m=+6637.101010747" watchObservedRunningTime="2025-12-02 15:33:11.692330336 +0000 UTC m=+6637.108144187" Dec 02 15:33:12 crc kubenswrapper[4900]: I1202 15:33:12.486521 4900 scope.go:117] "RemoveContainer" containerID="33293103028a8ccc5ba7daf0de7d4c5449b481fa15763d86507046fbaf2864a2" Dec 02 15:33:12 crc kubenswrapper[4900]: 
Dec 02 15:33:15 crc kubenswrapper[4900]: I1202 15:33:15.912013 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3"
Dec 02 15:33:15 crc kubenswrapper[4900]: E1202 15:33:15.912991 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:33:17 crc kubenswrapper[4900]: I1202 15:33:17.027572 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-69fcs"]
Dec 02 15:33:17 crc kubenswrapper[4900]: I1202 15:33:17.036136 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-69fcs"]
Dec 02 15:33:17 crc kubenswrapper[4900]: I1202 15:33:17.115548 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7vwhm"
Dec 02 15:33:17 crc kubenswrapper[4900]: I1202 15:33:17.115604 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7vwhm"
Dec 02 15:33:17 crc kubenswrapper[4900]: I1202 15:33:17.166689 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7vwhm"
Dec 02 15:33:17 crc kubenswrapper[4900]: I1202 15:33:17.770660 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7vwhm"
Dec 02 15:33:17 crc kubenswrapper[4900]: I1202 15:33:17.819127 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vwhm"]
Dec 02 15:33:18 crc kubenswrapper[4900]: I1202 15:33:18.923939 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8064cca9-ba2e-4f86-b4c2-bbe0c03f3587" path="/var/lib/kubelet/pods/8064cca9-ba2e-4f86-b4c2-bbe0c03f3587/volumes"
Dec 02 15:33:19 crc kubenswrapper[4900]: I1202 15:33:19.745439 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7vwhm" podUID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" containerName="registry-server" containerID="cri-o://00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9" gracePeriod=2
Dec 02 15:33:20 crc kubenswrapper[4900]: E1202 15:33:20.073718 4900 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.299766 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7vwhm"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.318522 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-utilities\") pod \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") "
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.318829 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k52r\" (UniqueName: \"kubernetes.io/projected/515acf0d-1e0c-4b44-a491-cf40a18e38f0-kube-api-access-2k52r\") pod \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") "
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.318871 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-catalog-content\") pod \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\" (UID: \"515acf0d-1e0c-4b44-a491-cf40a18e38f0\") "
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.319623 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-utilities" (OuterVolumeSpecName: "utilities") pod "515acf0d-1e0c-4b44-a491-cf40a18e38f0" (UID: "515acf0d-1e0c-4b44-a491-cf40a18e38f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.342830 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "515acf0d-1e0c-4b44-a491-cf40a18e38f0" (UID: "515acf0d-1e0c-4b44-a491-cf40a18e38f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.345837 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515acf0d-1e0c-4b44-a491-cf40a18e38f0-kube-api-access-2k52r" (OuterVolumeSpecName: "kube-api-access-2k52r") pod "515acf0d-1e0c-4b44-a491-cf40a18e38f0" (UID: "515acf0d-1e0c-4b44-a491-cf40a18e38f0"). InnerVolumeSpecName "kube-api-access-2k52r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.422013 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.422051 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k52r\" (UniqueName: \"kubernetes.io/projected/515acf0d-1e0c-4b44-a491-cf40a18e38f0-kube-api-access-2k52r\") on node \"crc\" DevicePath \"\""
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.422064 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515acf0d-1e0c-4b44-a491-cf40a18e38f0-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.759763 4900 generic.go:334] "Generic (PLEG): container finished" podID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" containerID="00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9" exitCode=0
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.759826 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vwhm" event={"ID":"515acf0d-1e0c-4b44-a491-cf40a18e38f0","Type":"ContainerDied","Data":"00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9"}
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.759897 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vwhm" event={"ID":"515acf0d-1e0c-4b44-a491-cf40a18e38f0","Type":"ContainerDied","Data":"921bb430d5e494aeb1a8299990b23b57b8802ecccef8ba1707df60458bbea7a3"}
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.759939 4900 scope.go:117] "RemoveContainer" containerID="00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.760351 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7vwhm"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.815899 4900 scope.go:117] "RemoveContainer" containerID="bd521fa7d619d53399ad2264f55e0ee72ec1c1e33780580eaaa4135766183976"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.816120 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vwhm"]
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.833429 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vwhm"]
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.859834 4900 scope.go:117] "RemoveContainer" containerID="b467fcf46b2980eada37e9f90df6ba66e577e3b00c2cb020a319e06f798186ab"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.911535 4900 scope.go:117] "RemoveContainer" containerID="00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9"
Dec 02 15:33:20 crc kubenswrapper[4900]: E1202 15:33:20.912029 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9\": container with ID starting with 00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9 not found: ID does not exist" containerID="00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.912086 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9"} err="failed to get container status \"00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9\": rpc error: code = NotFound desc = could not find container \"00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9\": container with ID starting with 00891045ca5e99274b151ad473e131f0572b212d411a778f200ee0306e1140e9 not found: ID does not exist"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.912120 4900 scope.go:117] "RemoveContainer" containerID="bd521fa7d619d53399ad2264f55e0ee72ec1c1e33780580eaaa4135766183976"
Dec 02 15:33:20 crc kubenswrapper[4900]: E1202 15:33:20.912489 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd521fa7d619d53399ad2264f55e0ee72ec1c1e33780580eaaa4135766183976\": container with ID starting with bd521fa7d619d53399ad2264f55e0ee72ec1c1e33780580eaaa4135766183976 not found: ID does not exist" containerID="bd521fa7d619d53399ad2264f55e0ee72ec1c1e33780580eaaa4135766183976"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.912529 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd521fa7d619d53399ad2264f55e0ee72ec1c1e33780580eaaa4135766183976"} err="failed to get container status \"bd521fa7d619d53399ad2264f55e0ee72ec1c1e33780580eaaa4135766183976\": rpc error: code = NotFound desc = could not find container \"bd521fa7d619d53399ad2264f55e0ee72ec1c1e33780580eaaa4135766183976\": container with ID starting with bd521fa7d619d53399ad2264f55e0ee72ec1c1e33780580eaaa4135766183976 not found: ID does not exist"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.912554 4900 scope.go:117] "RemoveContainer" containerID="b467fcf46b2980eada37e9f90df6ba66e577e3b00c2cb020a319e06f798186ab"
Dec 02 15:33:20 crc kubenswrapper[4900]: E1202 15:33:20.912829 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b467fcf46b2980eada37e9f90df6ba66e577e3b00c2cb020a319e06f798186ab\": container with ID starting with b467fcf46b2980eada37e9f90df6ba66e577e3b00c2cb020a319e06f798186ab not found: ID does not exist" containerID="b467fcf46b2980eada37e9f90df6ba66e577e3b00c2cb020a319e06f798186ab"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.912866 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b467fcf46b2980eada37e9f90df6ba66e577e3b00c2cb020a319e06f798186ab"} err="failed to get container status \"b467fcf46b2980eada37e9f90df6ba66e577e3b00c2cb020a319e06f798186ab\": rpc error: code = NotFound desc = could not find container \"b467fcf46b2980eada37e9f90df6ba66e577e3b00c2cb020a319e06f798186ab\": container with ID starting with b467fcf46b2980eada37e9f90df6ba66e577e3b00c2cb020a319e06f798186ab not found: ID does not exist"
Dec 02 15:33:20 crc kubenswrapper[4900]: I1202 15:33:20.924381 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" path="/var/lib/kubelet/pods/515acf0d-1e0c-4b44-a491-cf40a18e38f0/volumes"
Dec 02 15:33:29 crc kubenswrapper[4900]: I1202 15:33:29.910115 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3"
Dec 02 15:33:29 crc kubenswrapper[4900]: E1202 15:33:29.910870 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:33:40 crc kubenswrapper[4900]: I1202 15:33:40.911172 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3"
Dec 02 15:33:40 crc kubenswrapper[4900]: E1202 15:33:40.912032 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:33:53 crc kubenswrapper[4900]: I1202 15:33:53.910796 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3"
Dec 02 15:33:53 crc kubenswrapper[4900]: E1202 15:33:53.911751 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 15:34:08 crc kubenswrapper[4900]: I1202 15:34:08.911262 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3"
Dec 02 15:34:08 crc kubenswrapper[4900]: E1202 15:34:08.912354 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:34:12 crc kubenswrapper[4900]: I1202 15:34:12.662907 4900 scope.go:117] "RemoveContainer" containerID="4156734e4831055575ff3bec563a474c6ac6c35fa2b97e4cfcaa6c896586337d" Dec 02 15:34:22 crc kubenswrapper[4900]: I1202 15:34:22.910352 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:34:23 crc kubenswrapper[4900]: I1202 15:34:23.553837 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"bf3982dac883d0577245bf4c2431159618a9db4dcbcbb5bac899375abccdc166"} Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.788955 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6jdgz"] Dec 02 15:35:22 crc kubenswrapper[4900]: E1202 15:35:22.789955 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" containerName="extract-utilities" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.789970 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" containerName="extract-utilities" Dec 02 15:35:22 crc kubenswrapper[4900]: E1202 15:35:22.789991 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" containerName="extract-content" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.789999 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" containerName="extract-content" Dec 02 15:35:22 crc kubenswrapper[4900]: E1202 15:35:22.790024 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" containerName="registry-server" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.790032 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" containerName="registry-server" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.790325 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="515acf0d-1e0c-4b44-a491-cf40a18e38f0" containerName="registry-server" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.792370 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.805219 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jdgz"] Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.831504 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-utilities\") pod \"redhat-operators-6jdgz\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.831600 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfs4s\" (UniqueName: \"kubernetes.io/projected/dbccafaf-872e-4dae-93cc-a466739d6cc4-kube-api-access-wfs4s\") pod \"redhat-operators-6jdgz\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.831785 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-catalog-content\") pod \"redhat-operators-6jdgz\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.934592 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfs4s\" (UniqueName: \"kubernetes.io/projected/dbccafaf-872e-4dae-93cc-a466739d6cc4-kube-api-access-wfs4s\") pod \"redhat-operators-6jdgz\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.934792 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-catalog-content\") pod \"redhat-operators-6jdgz\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.934981 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-utilities\") pod \"redhat-operators-6jdgz\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.936050 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-catalog-content\") pod \"redhat-operators-6jdgz\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.942982 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-utilities\") pod \"redhat-operators-6jdgz\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:22 crc kubenswrapper[4900]: I1202 15:35:22.965797 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wfs4s\" (UniqueName: \"kubernetes.io/projected/dbccafaf-872e-4dae-93cc-a466739d6cc4-kube-api-access-wfs4s\") pod \"redhat-operators-6jdgz\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:23 crc kubenswrapper[4900]: I1202 15:35:23.156101 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:23 crc kubenswrapper[4900]: I1202 15:35:23.636637 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jdgz"] Dec 02 15:35:24 crc kubenswrapper[4900]: I1202 15:35:24.233097 4900 generic.go:334] "Generic (PLEG): container finished" podID="dbccafaf-872e-4dae-93cc-a466739d6cc4" containerID="9b4db5e7ea6ad96f69ceb6baac5c10178d03ec4d581bcd71a8a3cbc979d00aa5" exitCode=0 Dec 02 15:35:24 crc kubenswrapper[4900]: I1202 15:35:24.233147 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jdgz" event={"ID":"dbccafaf-872e-4dae-93cc-a466739d6cc4","Type":"ContainerDied","Data":"9b4db5e7ea6ad96f69ceb6baac5c10178d03ec4d581bcd71a8a3cbc979d00aa5"} Dec 02 15:35:24 crc kubenswrapper[4900]: I1202 15:35:24.233374 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jdgz" event={"ID":"dbccafaf-872e-4dae-93cc-a466739d6cc4","Type":"ContainerStarted","Data":"8592604759b0b8415c57634a429004c5916ecc321a178e56006d4dce73ca6b59"} Dec 02 15:35:24 crc kubenswrapper[4900]: I1202 15:35:24.236365 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:35:25 crc kubenswrapper[4900]: I1202 15:35:25.246687 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jdgz" event={"ID":"dbccafaf-872e-4dae-93cc-a466739d6cc4","Type":"ContainerStarted","Data":"aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1"} Dec 02 15:35:27 crc kubenswrapper[4900]: I1202 15:35:27.046440 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-2ff6-account-create-update-sldkc"] Dec 02 15:35:27 crc kubenswrapper[4900]: I1202 15:35:27.062356 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-wp7kl"] Dec 02 15:35:27 crc kubenswrapper[4900]: I1202 15:35:27.078035 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-wp7kl"] Dec 02 15:35:27 crc kubenswrapper[4900]: I1202 15:35:27.101506 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-2ff6-account-create-update-sldkc"] Dec 02 15:35:28 crc kubenswrapper[4900]: I1202 15:35:28.929353 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="894c30d1-7f8c-4440-bc50-6531ede27372" path="/var/lib/kubelet/pods/894c30d1-7f8c-4440-bc50-6531ede27372/volumes" Dec 02 15:35:28 crc kubenswrapper[4900]: I1202 15:35:28.930472 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="966744da-bd39-4cc3-8fc1-b8e2b66f1499" path="/var/lib/kubelet/pods/966744da-bd39-4cc3-8fc1-b8e2b66f1499/volumes" Dec 02 15:35:29 crc kubenswrapper[4900]: I1202 15:35:29.295363 4900 generic.go:334] "Generic (PLEG): container finished" podID="dbccafaf-872e-4dae-93cc-a466739d6cc4" containerID="aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1" exitCode=0 Dec 02 15:35:29 crc kubenswrapper[4900]: I1202 15:35:29.295435 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6jdgz" event={"ID":"dbccafaf-872e-4dae-93cc-a466739d6cc4","Type":"ContainerDied","Data":"aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1"} Dec 02 15:35:31 crc kubenswrapper[4900]: I1202 15:35:31.325229 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jdgz" event={"ID":"dbccafaf-872e-4dae-93cc-a466739d6cc4","Type":"ContainerStarted","Data":"789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0"} Dec 02 15:35:31 crc kubenswrapper[4900]: I1202 15:35:31.353935 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6jdgz" podStartSLOduration=3.4779825300000002 podStartE2EDuration="9.353912436s" podCreationTimestamp="2025-12-02 15:35:22 +0000 UTC" firstStartedPulling="2025-12-02 15:35:24.236165406 +0000 UTC m=+6769.651979257" lastFinishedPulling="2025-12-02 15:35:30.112095292 +0000 UTC m=+6775.527909163" observedRunningTime="2025-12-02 15:35:31.351632352 +0000 UTC m=+6776.767446243" watchObservedRunningTime="2025-12-02 15:35:31.353912436 +0000 UTC m=+6776.769726287" Dec 02 15:35:33 crc kubenswrapper[4900]: I1202 15:35:33.156919 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:33 crc kubenswrapper[4900]: I1202 15:35:33.157735 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:34 crc kubenswrapper[4900]: I1202 15:35:34.223224 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6jdgz" podUID="dbccafaf-872e-4dae-93cc-a466739d6cc4" containerName="registry-server" probeResult="failure" output=< Dec 02 15:35:34 crc kubenswrapper[4900]: timeout: failed to connect service ":50051" within 1s Dec 02 15:35:34 crc kubenswrapper[4900]: > Dec 02 15:35:40 crc kubenswrapper[4900]: I1202 15:35:40.045616 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-wrwr2"] Dec 02 15:35:40 crc kubenswrapper[4900]: I1202 15:35:40.057711 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-wrwr2"] Dec 02 15:35:40 crc kubenswrapper[4900]: I1202 15:35:40.929297 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf12d29-7078-4a8c-b1d9-ac6b1d331056" path="/var/lib/kubelet/pods/bdf12d29-7078-4a8c-b1d9-ac6b1d331056/volumes" Dec 02 15:35:43 crc kubenswrapper[4900]: I1202 15:35:43.251379 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:43 crc kubenswrapper[4900]: I1202 15:35:43.343955 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:43 crc kubenswrapper[4900]: I1202 15:35:43.502496 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jdgz"] Dec 02 15:35:44 crc kubenswrapper[4900]: I1202 15:35:44.473602 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6jdgz" podUID="dbccafaf-872e-4dae-93cc-a466739d6cc4" containerName="registry-server" containerID="cri-o://789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0" gracePeriod=2 Dec 02 15:35:44 crc kubenswrapper[4900]: I1202 15:35:44.974507 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.137290 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfs4s\" (UniqueName: \"kubernetes.io/projected/dbccafaf-872e-4dae-93cc-a466739d6cc4-kube-api-access-wfs4s\") pod \"dbccafaf-872e-4dae-93cc-a466739d6cc4\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.137653 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-utilities\") pod \"dbccafaf-872e-4dae-93cc-a466739d6cc4\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.137745 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-catalog-content\") pod \"dbccafaf-872e-4dae-93cc-a466739d6cc4\" (UID: \"dbccafaf-872e-4dae-93cc-a466739d6cc4\") " Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.139016 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-utilities" (OuterVolumeSpecName: "utilities") pod "dbccafaf-872e-4dae-93cc-a466739d6cc4" (UID: "dbccafaf-872e-4dae-93cc-a466739d6cc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.143904 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbccafaf-872e-4dae-93cc-a466739d6cc4-kube-api-access-wfs4s" (OuterVolumeSpecName: "kube-api-access-wfs4s") pod "dbccafaf-872e-4dae-93cc-a466739d6cc4" (UID: "dbccafaf-872e-4dae-93cc-a466739d6cc4"). InnerVolumeSpecName "kube-api-access-wfs4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.239432 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfs4s\" (UniqueName: \"kubernetes.io/projected/dbccafaf-872e-4dae-93cc-a466739d6cc4-kube-api-access-wfs4s\") on node \"crc\" DevicePath \"\"" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.239474 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.251827 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbccafaf-872e-4dae-93cc-a466739d6cc4" (UID: "dbccafaf-872e-4dae-93cc-a466739d6cc4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.341633 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbccafaf-872e-4dae-93cc-a466739d6cc4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.489425 4900 generic.go:334] "Generic (PLEG): container finished" podID="dbccafaf-872e-4dae-93cc-a466739d6cc4" containerID="789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0" exitCode=0 Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.489475 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jdgz" event={"ID":"dbccafaf-872e-4dae-93cc-a466739d6cc4","Type":"ContainerDied","Data":"789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0"} Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.489505 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jdgz" event={"ID":"dbccafaf-872e-4dae-93cc-a466739d6cc4","Type":"ContainerDied","Data":"8592604759b0b8415c57634a429004c5916ecc321a178e56006d4dce73ca6b59"} Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.489525 4900 scope.go:117] "RemoveContainer" containerID="789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.489720 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jdgz" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.518948 4900 scope.go:117] "RemoveContainer" containerID="aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.539440 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jdgz"] Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.550043 4900 scope.go:117] "RemoveContainer" containerID="9b4db5e7ea6ad96f69ceb6baac5c10178d03ec4d581bcd71a8a3cbc979d00aa5" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.552014 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6jdgz"] Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.599481 4900 scope.go:117] "RemoveContainer" containerID="789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0" Dec 02 15:35:45 crc kubenswrapper[4900]: E1202 15:35:45.599942 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0\": container with ID starting with 789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0 not found: ID does not exist" containerID="789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.599983 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0"} err="failed to get container status \"789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0\": rpc error: code = NotFound desc = could not find container \"789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0\": container with ID starting with 789e9ba23ccd7cccd8c1f1308147c52324f31545b389b19ff25ba71a322d59c0 not found: ID does not exist" Dec 02 15:35:45 crc 
kubenswrapper[4900]: I1202 15:35:45.600005 4900 scope.go:117] "RemoveContainer" containerID="aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1" Dec 02 15:35:45 crc kubenswrapper[4900]: E1202 15:35:45.600219 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1\": container with ID starting with aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1 not found: ID does not exist" containerID="aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.600246 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1"} err="failed to get container status \"aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1\": rpc error: code = NotFound desc = could not find container \"aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1\": container with ID starting with aa65fc103713bf9e4aa73268e41839a8f3e5615a643cdd713f8b6ed3dbeabdc1 not found: ID does not exist" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.600262 4900 scope.go:117] "RemoveContainer" containerID="9b4db5e7ea6ad96f69ceb6baac5c10178d03ec4d581bcd71a8a3cbc979d00aa5" Dec 02 15:35:45 crc kubenswrapper[4900]: E1202 15:35:45.600513 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b4db5e7ea6ad96f69ceb6baac5c10178d03ec4d581bcd71a8a3cbc979d00aa5\": container with ID starting with 9b4db5e7ea6ad96f69ceb6baac5c10178d03ec4d581bcd71a8a3cbc979d00aa5 not found: ID does not exist" containerID="9b4db5e7ea6ad96f69ceb6baac5c10178d03ec4d581bcd71a8a3cbc979d00aa5" Dec 02 15:35:45 crc kubenswrapper[4900]: I1202 15:35:45.600546 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b4db5e7ea6ad96f69ceb6baac5c10178d03ec4d581bcd71a8a3cbc979d00aa5"} err="failed to get container status \"9b4db5e7ea6ad96f69ceb6baac5c10178d03ec4d581bcd71a8a3cbc979d00aa5\": rpc error: code = NotFound desc = could not find container \"9b4db5e7ea6ad96f69ceb6baac5c10178d03ec4d581bcd71a8a3cbc979d00aa5\": container with ID starting with 9b4db5e7ea6ad96f69ceb6baac5c10178d03ec4d581bcd71a8a3cbc979d00aa5 not found: ID does not exist" Dec 02 15:35:46 crc kubenswrapper[4900]: I1202 15:35:46.927422 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbccafaf-872e-4dae-93cc-a466739d6cc4" path="/var/lib/kubelet/pods/dbccafaf-872e-4dae-93cc-a466739d6cc4/volumes" Dec 02 15:36:01 crc kubenswrapper[4900]: I1202 15:36:01.050794 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-93a5-account-create-update-486c7"] Dec 02 15:36:01 crc kubenswrapper[4900]: I1202 15:36:01.065604 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-jz9pz"] Dec 02 15:36:01 crc kubenswrapper[4900]: I1202 15:36:01.080948 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-93a5-account-create-update-486c7"] Dec 02 15:36:01 crc kubenswrapper[4900]: I1202 15:36:01.091937 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-jz9pz"] Dec 02 15:36:02 crc kubenswrapper[4900]: I1202 15:36:02.934634 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3886a21d-e294-4785-96b5-349d2ac2806e" 
path="/var/lib/kubelet/pods/3886a21d-e294-4785-96b5-349d2ac2806e/volumes" Dec 02 15:36:02 crc kubenswrapper[4900]: I1202 15:36:02.936775 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0897af7-7309-43b3-b1ff-f3097209a5eb" path="/var/lib/kubelet/pods/c0897af7-7309-43b3-b1ff-f3097209a5eb/volumes" Dec 02 15:36:12 crc kubenswrapper[4900]: I1202 15:36:12.776814 4900 scope.go:117] "RemoveContainer" containerID="e4194303e41fa9711b8777e540eb46bfaa738962f2c95061c242bfa838bbe5cf" Dec 02 15:36:12 crc kubenswrapper[4900]: I1202 15:36:12.817021 4900 scope.go:117] "RemoveContainer" containerID="e69c7a7eb83a480f66909df76151c9a7e58b11695542e7d60a494f99bd4c22bd" Dec 02 15:36:12 crc kubenswrapper[4900]: I1202 15:36:12.879608 4900 scope.go:117] "RemoveContainer" containerID="a77848aaf977bab0eb60a22b2edb155097f917dc5b900e8f794b3ace5422c553" Dec 02 15:36:12 crc kubenswrapper[4900]: I1202 15:36:12.964285 4900 scope.go:117] "RemoveContainer" containerID="85fdb82d7586149cbc8b595e3665b41dca9053f6febe467b2ce309ed1f76ff37" Dec 02 15:36:12 crc kubenswrapper[4900]: I1202 15:36:12.993453 4900 scope.go:117] "RemoveContainer" containerID="215909d9f8ab7fb787239e3fdcfe70742129381a322b3856e9f6cfe254821da7" Dec 02 15:36:13 crc kubenswrapper[4900]: I1202 15:36:13.036142 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-4622t"] Dec 02 15:36:13 crc kubenswrapper[4900]: I1202 15:36:13.047174 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-4622t"] Dec 02 15:36:14 crc kubenswrapper[4900]: I1202 15:36:14.929714 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719b71e7-a7e7-4348-8ef3-b5a3594791e7" path="/var/lib/kubelet/pods/719b71e7-a7e7-4348-8ef3-b5a3594791e7/volumes" Dec 02 15:36:45 crc kubenswrapper[4900]: I1202 15:36:45.116280 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:36:45 crc kubenswrapper[4900]: I1202 15:36:45.116929 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:37:13 crc kubenswrapper[4900]: I1202 15:37:13.159680 4900 scope.go:117] "RemoveContainer" containerID="41dc13f0cef977cbfcb4826a230158f95afe42f33bde3c5a567e66c1aa6022fb" Dec 02 15:37:15 crc kubenswrapper[4900]: I1202 15:37:15.116413 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:37:15 crc kubenswrapper[4900]: I1202 15:37:15.117247 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:37:45 crc kubenswrapper[4900]: I1202 15:37:45.116901 4900 patch_prober.go:28] interesting 
pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:37:45 crc kubenswrapper[4900]: I1202 15:37:45.117478 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:37:45 crc kubenswrapper[4900]: I1202 15:37:45.117521 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 15:37:45 crc kubenswrapper[4900]: I1202 15:37:45.118902 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf3982dac883d0577245bf4c2431159618a9db4dcbcbb5bac899375abccdc166"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:37:45 crc kubenswrapper[4900]: I1202 15:37:45.118984 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://bf3982dac883d0577245bf4c2431159618a9db4dcbcbb5bac899375abccdc166" gracePeriod=600 Dec 02 15:37:45 crc kubenswrapper[4900]: I1202 15:37:45.899542 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="bf3982dac883d0577245bf4c2431159618a9db4dcbcbb5bac899375abccdc166" exitCode=0 Dec 02 15:37:45 crc kubenswrapper[4900]: I1202 15:37:45.899611 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"bf3982dac883d0577245bf4c2431159618a9db4dcbcbb5bac899375abccdc166"} Dec 02 15:37:45 crc kubenswrapper[4900]: I1202 15:37:45.900129 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c"} Dec 02 15:37:45 crc kubenswrapper[4900]: I1202 15:37:45.900158 4900 scope.go:117] "RemoveContainer" containerID="e9f290a337b13f36ba58c6b645c41fbda80c1db5e9a4eff3fbacbc26022209a3" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.377474 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rkfdw"] Dec 02 15:38:36 crc kubenswrapper[4900]: E1202 15:38:36.378531 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbccafaf-872e-4dae-93cc-a466739d6cc4" containerName="extract-utilities" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.378549 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbccafaf-872e-4dae-93cc-a466739d6cc4" containerName="extract-utilities" Dec 02 15:38:36 crc kubenswrapper[4900]: E1202 15:38:36.378585 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbccafaf-872e-4dae-93cc-a466739d6cc4" 
containerName="extract-content" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.378596 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbccafaf-872e-4dae-93cc-a466739d6cc4" containerName="extract-content" Dec 02 15:38:36 crc kubenswrapper[4900]: E1202 15:38:36.378610 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbccafaf-872e-4dae-93cc-a466739d6cc4" containerName="registry-server" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.378618 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbccafaf-872e-4dae-93cc-a466739d6cc4" containerName="registry-server" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.378969 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbccafaf-872e-4dae-93cc-a466739d6cc4" containerName="registry-server" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.380996 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.391508 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rkfdw"] Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.518963 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-utilities\") pod \"certified-operators-rkfdw\" (UID: \"b856300f-9895-4b14-80b6-0f2c711a5636\") " pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.519020 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xlwk\" (UniqueName: \"kubernetes.io/projected/b856300f-9895-4b14-80b6-0f2c711a5636-kube-api-access-5xlwk\") pod \"certified-operators-rkfdw\" (UID: \"b856300f-9895-4b14-80b6-0f2c711a5636\") " pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.520005 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-catalog-content\") pod \"certified-operators-rkfdw\" (UID: \"b856300f-9895-4b14-80b6-0f2c711a5636\") " pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.622537 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-utilities\") pod \"certified-operators-rkfdw\" (UID: \"b856300f-9895-4b14-80b6-0f2c711a5636\") " pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.622587 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xlwk\" (UniqueName: \"kubernetes.io/projected/b856300f-9895-4b14-80b6-0f2c711a5636-kube-api-access-5xlwk\") pod \"certified-operators-rkfdw\" (UID: \"b856300f-9895-4b14-80b6-0f2c711a5636\") " pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.622638 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-catalog-content\") pod \"certified-operators-rkfdw\" (UID: 
\"b856300f-9895-4b14-80b6-0f2c711a5636\") " pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.623100 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-utilities\") pod \"certified-operators-rkfdw\" (UID: \"b856300f-9895-4b14-80b6-0f2c711a5636\") " pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.623134 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-catalog-content\") pod \"certified-operators-rkfdw\" (UID: \"b856300f-9895-4b14-80b6-0f2c711a5636\") " pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.642051 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xlwk\" (UniqueName: \"kubernetes.io/projected/b856300f-9895-4b14-80b6-0f2c711a5636-kube-api-access-5xlwk\") pod \"certified-operators-rkfdw\" (UID: \"b856300f-9895-4b14-80b6-0f2c711a5636\") " pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:36 crc kubenswrapper[4900]: I1202 15:38:36.711729 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:37 crc kubenswrapper[4900]: I1202 15:38:37.281849 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rkfdw"] Dec 02 15:38:37 crc kubenswrapper[4900]: I1202 15:38:37.456136 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkfdw" event={"ID":"b856300f-9895-4b14-80b6-0f2c711a5636","Type":"ContainerStarted","Data":"11a2e81e6d2244848036b768271166d7e5a3cbfaa9dd5ec4bae75474ca59e777"} Dec 02 15:38:38 crc kubenswrapper[4900]: I1202 15:38:38.471487 4900 generic.go:334] "Generic (PLEG): container finished" podID="b856300f-9895-4b14-80b6-0f2c711a5636" containerID="be6d25386e4f54e3e00de89d45a63d9397070a52aa00953ee4ad8d4208bad8f1" exitCode=0 Dec 02 15:38:38 crc kubenswrapper[4900]: I1202 15:38:38.471689 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkfdw" event={"ID":"b856300f-9895-4b14-80b6-0f2c711a5636","Type":"ContainerDied","Data":"be6d25386e4f54e3e00de89d45a63d9397070a52aa00953ee4ad8d4208bad8f1"} Dec 02 15:38:40 crc kubenswrapper[4900]: I1202 15:38:40.496868 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkfdw" event={"ID":"b856300f-9895-4b14-80b6-0f2c711a5636","Type":"ContainerStarted","Data":"845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf"} Dec 02 15:38:41 crc kubenswrapper[4900]: I1202 15:38:41.512291 4900 generic.go:334] "Generic (PLEG): container finished" podID="b856300f-9895-4b14-80b6-0f2c711a5636" containerID="845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf" exitCode=0 Dec 02 15:38:41 crc kubenswrapper[4900]: I1202 15:38:41.512343 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkfdw" event={"ID":"b856300f-9895-4b14-80b6-0f2c711a5636","Type":"ContainerDied","Data":"845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf"} Dec 02 15:38:43 crc kubenswrapper[4900]: I1202 15:38:43.535751 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rkfdw" event={"ID":"b856300f-9895-4b14-80b6-0f2c711a5636","Type":"ContainerStarted","Data":"2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1"} Dec 02 15:38:43 crc kubenswrapper[4900]: I1202 15:38:43.562229 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rkfdw" podStartSLOduration=3.466714161 podStartE2EDuration="7.562207121s" podCreationTimestamp="2025-12-02 15:38:36 +0000 UTC" firstStartedPulling="2025-12-02 15:38:38.476617798 +0000 UTC m=+6963.892431689" lastFinishedPulling="2025-12-02 15:38:42.572110808 +0000 UTC m=+6967.987924649" observedRunningTime="2025-12-02 15:38:43.557317473 +0000 UTC m=+6968.973131344" watchObservedRunningTime="2025-12-02 15:38:43.562207121 +0000 UTC m=+6968.978020972" Dec 02 15:38:43 crc kubenswrapper[4900]: I1202 15:38:43.803898 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xchtt"] Dec 02 15:38:43 crc kubenswrapper[4900]: I1202 15:38:43.807563 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:43 crc kubenswrapper[4900]: I1202 15:38:43.818398 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xchtt"] Dec 02 15:38:43 crc kubenswrapper[4900]: I1202 15:38:43.897434 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-utilities\") pod \"community-operators-xchtt\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:43 crc kubenswrapper[4900]: I1202 15:38:43.897807 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-catalog-content\") pod \"community-operators-xchtt\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:43 crc kubenswrapper[4900]: I1202 15:38:43.898002 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48rl\" (UniqueName: \"kubernetes.io/projected/43c5a91f-9fdb-415b-b297-51da69ba8eb8-kube-api-access-w48rl\") pod \"community-operators-xchtt\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:44 crc kubenswrapper[4900]: I1202 15:38:44.000204 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-utilities\") pod \"community-operators-xchtt\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:44 crc kubenswrapper[4900]: I1202 15:38:44.000339 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-catalog-content\") pod \"community-operators-xchtt\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:44 crc kubenswrapper[4900]: I1202 15:38:44.000446 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w48rl\" (UniqueName: \"kubernetes.io/projected/43c5a91f-9fdb-415b-b297-51da69ba8eb8-kube-api-access-w48rl\") pod \"community-operators-xchtt\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:44 crc kubenswrapper[4900]: I1202 15:38:44.002003 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-catalog-content\") pod \"community-operators-xchtt\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:44 crc kubenswrapper[4900]: I1202 15:38:44.002251 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-utilities\") pod \"community-operators-xchtt\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:44 crc kubenswrapper[4900]: I1202 15:38:44.020996 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w48rl\" (UniqueName: \"kubernetes.io/projected/43c5a91f-9fdb-415b-b297-51da69ba8eb8-kube-api-access-w48rl\") pod \"community-operators-xchtt\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:44 crc kubenswrapper[4900]: I1202 15:38:44.129813 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:44 crc kubenswrapper[4900]: I1202 15:38:44.636395 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xchtt"] Dec 02 15:38:45 crc kubenswrapper[4900]: I1202 15:38:45.565463 4900 generic.go:334] "Generic (PLEG): container finished" podID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" containerID="c652b08d1f77ff52934dfe30a3c28e03a0d244a0fb41ce5fc88cc67c84f875be" exitCode=0 Dec 02 15:38:45 crc kubenswrapper[4900]: I1202 15:38:45.565960 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xchtt" event={"ID":"43c5a91f-9fdb-415b-b297-51da69ba8eb8","Type":"ContainerDied","Data":"c652b08d1f77ff52934dfe30a3c28e03a0d244a0fb41ce5fc88cc67c84f875be"} Dec 02 15:38:45 crc kubenswrapper[4900]: I1202 15:38:45.566107 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xchtt" event={"ID":"43c5a91f-9fdb-415b-b297-51da69ba8eb8","Type":"ContainerStarted","Data":"bd8cf80ce46856b769742a7a520f948984d9446a89329b998fd850c8948a72a8"} Dec 02 15:38:46 crc kubenswrapper[4900]: I1202 15:38:46.712979 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:46 crc kubenswrapper[4900]: I1202 15:38:46.713306 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:46 crc kubenswrapper[4900]: I1202 15:38:46.784415 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:47 crc kubenswrapper[4900]: I1202 15:38:47.622760 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xchtt" 
event={"ID":"43c5a91f-9fdb-415b-b297-51da69ba8eb8","Type":"ContainerStarted","Data":"0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497"} Dec 02 15:38:47 crc kubenswrapper[4900]: I1202 15:38:47.673426 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:48 crc kubenswrapper[4900]: I1202 15:38:48.189211 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rkfdw"] Dec 02 15:38:48 crc kubenswrapper[4900]: I1202 15:38:48.638135 4900 generic.go:334] "Generic (PLEG): container finished" podID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" containerID="0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497" exitCode=0 Dec 02 15:38:48 crc kubenswrapper[4900]: I1202 15:38:48.638228 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xchtt" event={"ID":"43c5a91f-9fdb-415b-b297-51da69ba8eb8","Type":"ContainerDied","Data":"0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497"} Dec 02 15:38:49 crc kubenswrapper[4900]: I1202 15:38:49.669954 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rkfdw" podUID="b856300f-9895-4b14-80b6-0f2c711a5636" containerName="registry-server" containerID="cri-o://2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1" gracePeriod=2 Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.193863 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.300656 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-catalog-content\") pod \"b856300f-9895-4b14-80b6-0f2c711a5636\" (UID: \"b856300f-9895-4b14-80b6-0f2c711a5636\") " Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.300704 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xlwk\" (UniqueName: \"kubernetes.io/projected/b856300f-9895-4b14-80b6-0f2c711a5636-kube-api-access-5xlwk\") pod \"b856300f-9895-4b14-80b6-0f2c711a5636\" (UID: \"b856300f-9895-4b14-80b6-0f2c711a5636\") " Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.300770 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-utilities\") pod \"b856300f-9895-4b14-80b6-0f2c711a5636\" (UID: \"b856300f-9895-4b14-80b6-0f2c711a5636\") " Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.301735 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-utilities" (OuterVolumeSpecName: "utilities") pod "b856300f-9895-4b14-80b6-0f2c711a5636" (UID: "b856300f-9895-4b14-80b6-0f2c711a5636"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.305892 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b856300f-9895-4b14-80b6-0f2c711a5636-kube-api-access-5xlwk" (OuterVolumeSpecName: "kube-api-access-5xlwk") pod "b856300f-9895-4b14-80b6-0f2c711a5636" (UID: "b856300f-9895-4b14-80b6-0f2c711a5636"). 
InnerVolumeSpecName "kube-api-access-5xlwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.363798 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b856300f-9895-4b14-80b6-0f2c711a5636" (UID: "b856300f-9895-4b14-80b6-0f2c711a5636"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.403271 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.403493 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xlwk\" (UniqueName: \"kubernetes.io/projected/b856300f-9895-4b14-80b6-0f2c711a5636-kube-api-access-5xlwk\") on node \"crc\" DevicePath \"\"" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.403509 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b856300f-9895-4b14-80b6-0f2c711a5636-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.684358 4900 generic.go:334] "Generic (PLEG): container finished" podID="b856300f-9895-4b14-80b6-0f2c711a5636" containerID="2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1" exitCode=0 Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.684461 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkfdw" event={"ID":"b856300f-9895-4b14-80b6-0f2c711a5636","Type":"ContainerDied","Data":"2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1"} Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.684486 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rkfdw" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.684513 4900 scope.go:117] "RemoveContainer" containerID="2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.684498 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkfdw" event={"ID":"b856300f-9895-4b14-80b6-0f2c711a5636","Type":"ContainerDied","Data":"11a2e81e6d2244848036b768271166d7e5a3cbfaa9dd5ec4bae75474ca59e777"} Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.691155 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xchtt" event={"ID":"43c5a91f-9fdb-415b-b297-51da69ba8eb8","Type":"ContainerStarted","Data":"68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9"} Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.713031 4900 scope.go:117] "RemoveContainer" containerID="845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.726503 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xchtt" podStartSLOduration=3.726346642 podStartE2EDuration="7.726483939s" podCreationTimestamp="2025-12-02 15:38:43 +0000 UTC" firstStartedPulling="2025-12-02 15:38:45.568444489 +0000 UTC m=+6970.984258350" lastFinishedPulling="2025-12-02 15:38:49.568581776 +0000 UTC m=+6974.984395647" observedRunningTime="2025-12-02 15:38:50.721349845 +0000 UTC m=+6976.137163746" watchObservedRunningTime="2025-12-02 15:38:50.726483939 +0000 UTC m=+6976.142297800" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.740053 4900 scope.go:117] "RemoveContainer" containerID="be6d25386e4f54e3e00de89d45a63d9397070a52aa00953ee4ad8d4208bad8f1" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.774455 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rkfdw"] Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.784818 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rkfdw"] Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.801967 4900 scope.go:117] "RemoveContainer" containerID="2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1" Dec 02 15:38:50 crc kubenswrapper[4900]: E1202 15:38:50.802607 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1\": container with ID starting with 2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1 not found: ID does not exist" containerID="2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.802757 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1"} err="failed to get container status \"2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1\": rpc error: code = NotFound desc = could not find container \"2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1\": container with ID starting with 2e394842a36f72adac6935d2a4a23cc78cd80879f5b877b3c29f846086d7fde1 not found: ID does not exist" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.802853 4900 
scope.go:117] "RemoveContainer" containerID="845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf" Dec 02 15:38:50 crc kubenswrapper[4900]: E1202 15:38:50.803204 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf\": container with ID starting with 845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf not found: ID does not exist" containerID="845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.803301 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf"} err="failed to get container status \"845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf\": rpc error: code = NotFound desc = could not find container \"845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf\": container with ID starting with 845b52b550355606bd1f4ce867bc57360305edda02cd17dc3fd81e618fc95eaf not found: ID does not exist" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.803395 4900 scope.go:117] "RemoveContainer" containerID="be6d25386e4f54e3e00de89d45a63d9397070a52aa00953ee4ad8d4208bad8f1" Dec 02 15:38:50 crc kubenswrapper[4900]: E1202 15:38:50.803696 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be6d25386e4f54e3e00de89d45a63d9397070a52aa00953ee4ad8d4208bad8f1\": container with ID starting with be6d25386e4f54e3e00de89d45a63d9397070a52aa00953ee4ad8d4208bad8f1 not found: ID does not exist" containerID="be6d25386e4f54e3e00de89d45a63d9397070a52aa00953ee4ad8d4208bad8f1" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.803806 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6d25386e4f54e3e00de89d45a63d9397070a52aa00953ee4ad8d4208bad8f1"} err="failed to get container status \"be6d25386e4f54e3e00de89d45a63d9397070a52aa00953ee4ad8d4208bad8f1\": rpc error: code = NotFound desc = could not find container \"be6d25386e4f54e3e00de89d45a63d9397070a52aa00953ee4ad8d4208bad8f1\": container with ID starting with be6d25386e4f54e3e00de89d45a63d9397070a52aa00953ee4ad8d4208bad8f1 not found: ID does not exist" Dec 02 15:38:50 crc kubenswrapper[4900]: I1202 15:38:50.927209 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b856300f-9895-4b14-80b6-0f2c711a5636" path="/var/lib/kubelet/pods/b856300f-9895-4b14-80b6-0f2c711a5636/volumes" Dec 02 15:38:54 crc kubenswrapper[4900]: I1202 15:38:54.130788 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:54 crc kubenswrapper[4900]: I1202 15:38:54.131767 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:54 crc kubenswrapper[4900]: I1202 15:38:54.187826 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:54 crc kubenswrapper[4900]: I1202 15:38:54.834957 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:55 crc kubenswrapper[4900]: I1202 15:38:55.374387 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-xchtt"] Dec 02 15:38:56 crc kubenswrapper[4900]: I1202 15:38:56.763552 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xchtt" podUID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" containerName="registry-server" containerID="cri-o://68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9" gracePeriod=2 Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.413790 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.576637 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w48rl\" (UniqueName: \"kubernetes.io/projected/43c5a91f-9fdb-415b-b297-51da69ba8eb8-kube-api-access-w48rl\") pod \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.576757 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-catalog-content\") pod \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.576835 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-utilities\") pod \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\" (UID: \"43c5a91f-9fdb-415b-b297-51da69ba8eb8\") " Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.578003 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-utilities" (OuterVolumeSpecName: "utilities") pod "43c5a91f-9fdb-415b-b297-51da69ba8eb8" (UID: "43c5a91f-9fdb-415b-b297-51da69ba8eb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.590574 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c5a91f-9fdb-415b-b297-51da69ba8eb8-kube-api-access-w48rl" (OuterVolumeSpecName: "kube-api-access-w48rl") pod "43c5a91f-9fdb-415b-b297-51da69ba8eb8" (UID: "43c5a91f-9fdb-415b-b297-51da69ba8eb8"). InnerVolumeSpecName "kube-api-access-w48rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.649571 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43c5a91f-9fdb-415b-b297-51da69ba8eb8" (UID: "43c5a91f-9fdb-415b-b297-51da69ba8eb8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.679618 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w48rl\" (UniqueName: \"kubernetes.io/projected/43c5a91f-9fdb-415b-b297-51da69ba8eb8-kube-api-access-w48rl\") on node \"crc\" DevicePath \"\"" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.679671 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.679682 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c5a91f-9fdb-415b-b297-51da69ba8eb8-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.779450 4900 generic.go:334] "Generic (PLEG): container finished" podID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" containerID="68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9" exitCode=0 Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.779490 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xchtt" event={"ID":"43c5a91f-9fdb-415b-b297-51da69ba8eb8","Type":"ContainerDied","Data":"68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9"} Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.779518 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xchtt" event={"ID":"43c5a91f-9fdb-415b-b297-51da69ba8eb8","Type":"ContainerDied","Data":"bd8cf80ce46856b769742a7a520f948984d9446a89329b998fd850c8948a72a8"} Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.779538 4900 scope.go:117] "RemoveContainer" containerID="68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.779583 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xchtt" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.805278 4900 scope.go:117] "RemoveContainer" containerID="0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.858461 4900 scope.go:117] "RemoveContainer" containerID="c652b08d1f77ff52934dfe30a3c28e03a0d244a0fb41ce5fc88cc67c84f875be" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.870481 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xchtt"] Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.881878 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xchtt"] Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.892749 4900 scope.go:117] "RemoveContainer" containerID="68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9" Dec 02 15:38:57 crc kubenswrapper[4900]: E1202 15:38:57.893180 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9\": container with ID starting with 68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9 not found: ID does not exist" containerID="68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.893229 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9"} err="failed to get container status \"68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9\": rpc error: code = NotFound desc = could not find container \"68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9\": container with ID starting with 68bdecc010fec2ab7041ee9f2d700585b031f7cc5233a75af962ff2886e7b3f9 not found: ID does not exist" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.893258 4900 scope.go:117] "RemoveContainer" containerID="0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497" Dec 02 15:38:57 crc kubenswrapper[4900]: E1202 15:38:57.894178 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497\": container with ID starting with 0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497 not found: ID does not exist" containerID="0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.894210 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497"} err="failed to get container status \"0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497\": rpc error: code = NotFound desc = could not find container \"0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497\": container with ID starting with 0283bfe797d8939f5519b571b0770c6177d47cc90222f1ba63ce3ac7fbe75497 not found: ID does not exist" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.894229 4900 scope.go:117] "RemoveContainer" containerID="c652b08d1f77ff52934dfe30a3c28e03a0d244a0fb41ce5fc88cc67c84f875be" Dec 02 15:38:57 crc kubenswrapper[4900]: E1202 15:38:57.894532 4900 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c652b08d1f77ff52934dfe30a3c28e03a0d244a0fb41ce5fc88cc67c84f875be\": container with ID starting with c652b08d1f77ff52934dfe30a3c28e03a0d244a0fb41ce5fc88cc67c84f875be not found: ID does not exist" containerID="c652b08d1f77ff52934dfe30a3c28e03a0d244a0fb41ce5fc88cc67c84f875be" Dec 02 15:38:57 crc kubenswrapper[4900]: I1202 15:38:57.894553 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c652b08d1f77ff52934dfe30a3c28e03a0d244a0fb41ce5fc88cc67c84f875be"} err="failed to get container status \"c652b08d1f77ff52934dfe30a3c28e03a0d244a0fb41ce5fc88cc67c84f875be\": rpc error: code = NotFound desc = could not find container \"c652b08d1f77ff52934dfe30a3c28e03a0d244a0fb41ce5fc88cc67c84f875be\": container with ID starting with c652b08d1f77ff52934dfe30a3c28e03a0d244a0fb41ce5fc88cc67c84f875be not found: ID does not exist" Dec 02 15:38:58 crc kubenswrapper[4900]: I1202 15:38:58.928891 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" path="/var/lib/kubelet/pods/43c5a91f-9fdb-415b-b297-51da69ba8eb8/volumes" Dec 02 15:39:19 crc kubenswrapper[4900]: I1202 15:39:19.015603 4900 generic.go:334] "Generic (PLEG): container finished" podID="8a8021fa-4038-4f47-ac57-f800a48e293a" containerID="f02c4dcac42079777dee927eb0b93d9d5f9fb792299a6062f909cf9b9cf4f681" exitCode=0 Dec 02 15:39:19 crc kubenswrapper[4900]: I1202 15:39:19.015733 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" event={"ID":"8a8021fa-4038-4f47-ac57-f800a48e293a","Type":"ContainerDied","Data":"f02c4dcac42079777dee927eb0b93d9d5f9fb792299a6062f909cf9b9cf4f681"} Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.511919 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.576749 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6rl4\" (UniqueName: \"kubernetes.io/projected/8a8021fa-4038-4f47-ac57-f800a48e293a-kube-api-access-m6rl4\") pod \"8a8021fa-4038-4f47-ac57-f800a48e293a\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.576803 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ceph\") pod \"8a8021fa-4038-4f47-ac57-f800a48e293a\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.576839 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-tripleo-cleanup-combined-ca-bundle\") pod \"8a8021fa-4038-4f47-ac57-f800a48e293a\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.577018 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-inventory\") pod \"8a8021fa-4038-4f47-ac57-f800a48e293a\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.577121 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ssh-key\") pod \"8a8021fa-4038-4f47-ac57-f800a48e293a\" (UID: \"8a8021fa-4038-4f47-ac57-f800a48e293a\") " Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.583908 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ceph" (OuterVolumeSpecName: "ceph") pod "8a8021fa-4038-4f47-ac57-f800a48e293a" (UID: "8a8021fa-4038-4f47-ac57-f800a48e293a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.583920 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "8a8021fa-4038-4f47-ac57-f800a48e293a" (UID: "8a8021fa-4038-4f47-ac57-f800a48e293a"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.584021 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a8021fa-4038-4f47-ac57-f800a48e293a-kube-api-access-m6rl4" (OuterVolumeSpecName: "kube-api-access-m6rl4") pod "8a8021fa-4038-4f47-ac57-f800a48e293a" (UID: "8a8021fa-4038-4f47-ac57-f800a48e293a"). InnerVolumeSpecName "kube-api-access-m6rl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.608419 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8a8021fa-4038-4f47-ac57-f800a48e293a" (UID: "8a8021fa-4038-4f47-ac57-f800a48e293a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.625070 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-inventory" (OuterVolumeSpecName: "inventory") pod "8a8021fa-4038-4f47-ac57-f800a48e293a" (UID: "8a8021fa-4038-4f47-ac57-f800a48e293a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.680970 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6rl4\" (UniqueName: \"kubernetes.io/projected/8a8021fa-4038-4f47-ac57-f800a48e293a-kube-api-access-m6rl4\") on node \"crc\" DevicePath \"\"" Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.681001 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.681012 4900 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.681022 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:39:20 crc kubenswrapper[4900]: I1202 15:39:20.681032 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a8021fa-4038-4f47-ac57-f800a48e293a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:39:21 crc kubenswrapper[4900]: I1202 15:39:21.037556 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" event={"ID":"8a8021fa-4038-4f47-ac57-f800a48e293a","Type":"ContainerDied","Data":"48a5d182ac79a4352e952c497b9af9b54b55bd45760f489db27a7d3fd4cdde29"} Dec 02 15:39:21 crc kubenswrapper[4900]: I1202 15:39:21.037881 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48a5d182ac79a4352e952c497b9af9b54b55bd45760f489db27a7d3fd4cdde29" Dec 02 15:39:21 crc kubenswrapper[4900]: I1202 15:39:21.037803 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.742708 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-59jqm"] Dec 02 15:39:29 crc kubenswrapper[4900]: E1202 15:39:29.743869 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" containerName="extract-content" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.743890 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" containerName="extract-content" Dec 02 15:39:29 crc kubenswrapper[4900]: E1202 15:39:29.743915 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" containerName="extract-utilities" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.743926 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" containerName="extract-utilities" Dec 02 15:39:29 crc kubenswrapper[4900]: E1202 15:39:29.743946 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b856300f-9895-4b14-80b6-0f2c711a5636" containerName="registry-server" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.743958 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b856300f-9895-4b14-80b6-0f2c711a5636" containerName="registry-server" Dec 02 15:39:29 crc kubenswrapper[4900]: E1202 15:39:29.743992 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b856300f-9895-4b14-80b6-0f2c711a5636" containerName="extract-utilities" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.744002 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b856300f-9895-4b14-80b6-0f2c711a5636" containerName="extract-utilities" Dec 02 15:39:29 crc kubenswrapper[4900]: E1202 15:39:29.744014 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b856300f-9895-4b14-80b6-0f2c711a5636" containerName="extract-content" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.744025 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b856300f-9895-4b14-80b6-0f2c711a5636" containerName="extract-content" Dec 02 15:39:29 crc kubenswrapper[4900]: E1202 15:39:29.744043 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" containerName="registry-server" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.744050 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" containerName="registry-server" Dec 02 15:39:29 crc kubenswrapper[4900]: E1202 15:39:29.744062 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8021fa-4038-4f47-ac57-f800a48e293a" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.744071 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8021fa-4038-4f47-ac57-f800a48e293a" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.744351 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b856300f-9895-4b14-80b6-0f2c711a5636" containerName="registry-server" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.744383 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c5a91f-9fdb-415b-b297-51da69ba8eb8" containerName="registry-server" Dec 02 15:39:29 crc 
kubenswrapper[4900]: I1202 15:39:29.744409 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a8021fa-4038-4f47-ac57-f800a48e293a" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.745371 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.750576 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.750874 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.751499 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.751599 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.771467 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-59jqm"] Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.792417 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkl9f\" (UniqueName: \"kubernetes.io/projected/c354e156-0a05-4523-85a4-ce5d110c449a-kube-api-access-hkl9f\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.792548 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ceph\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.792622 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.792637 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.792714 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-inventory\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.894366 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hkl9f\" (UniqueName: \"kubernetes.io/projected/c354e156-0a05-4523-85a4-ce5d110c449a-kube-api-access-hkl9f\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.894564 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ceph\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.894818 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.894870 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.894916 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-inventory\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.900326 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.900559 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-inventory\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.902319 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ceph\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.909539 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " 
pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:29 crc kubenswrapper[4900]: I1202 15:39:29.911879 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkl9f\" (UniqueName: \"kubernetes.io/projected/c354e156-0a05-4523-85a4-ce5d110c449a-kube-api-access-hkl9f\") pod \"bootstrap-openstack-openstack-cell1-59jqm\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:30 crc kubenswrapper[4900]: I1202 15:39:30.069034 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:39:30 crc kubenswrapper[4900]: I1202 15:39:30.655201 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-59jqm"] Dec 02 15:39:31 crc kubenswrapper[4900]: I1202 15:39:31.161947 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" event={"ID":"c354e156-0a05-4523-85a4-ce5d110c449a","Type":"ContainerStarted","Data":"fd9f70ca7a6b50806eb03322ae9b3c822e4827e9d10bb3debcde189578d76454"} Dec 02 15:39:32 crc kubenswrapper[4900]: I1202 15:39:32.173001 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" event={"ID":"c354e156-0a05-4523-85a4-ce5d110c449a","Type":"ContainerStarted","Data":"88cd2b9995fb9d146651ebbe397fa509cef8229e36122ceb436987f30ab31cec"} Dec 02 15:39:32 crc kubenswrapper[4900]: I1202 15:39:32.197394 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" podStartSLOduration=2.462648451 podStartE2EDuration="3.19737451s" podCreationTimestamp="2025-12-02 15:39:29 +0000 UTC" firstStartedPulling="2025-12-02 15:39:30.663792798 +0000 UTC m=+7016.079606659" lastFinishedPulling="2025-12-02 15:39:31.398518857 +0000 UTC m=+7016.814332718" observedRunningTime="2025-12-02 15:39:32.19062209 +0000 UTC m=+7017.606435951" watchObservedRunningTime="2025-12-02 15:39:32.19737451 +0000 UTC m=+7017.613188361" Dec 02 15:39:45 crc kubenswrapper[4900]: I1202 15:39:45.116215 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:39:45 crc kubenswrapper[4900]: I1202 15:39:45.116896 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:40:15 crc kubenswrapper[4900]: I1202 15:40:15.116800 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:40:15 crc kubenswrapper[4900]: I1202 15:40:15.117361 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:40:45 crc kubenswrapper[4900]: I1202 15:40:45.117200 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:40:45 crc kubenswrapper[4900]: I1202 15:40:45.117924 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:40:45 crc kubenswrapper[4900]: I1202 15:40:45.118013 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 15:40:45 crc kubenswrapper[4900]: I1202 15:40:45.118897 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:40:45 crc kubenswrapper[4900]: I1202 15:40:45.118986 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" gracePeriod=600 Dec 02 15:40:45 crc kubenswrapper[4900]: E1202 15:40:45.251989 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:40:46 crc kubenswrapper[4900]: I1202 15:40:46.178135 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" exitCode=0 Dec 02 15:40:46 crc kubenswrapper[4900]: I1202 15:40:46.178230 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c"} Dec 02 15:40:46 crc kubenswrapper[4900]: I1202 15:40:46.178544 4900 scope.go:117] "RemoveContainer" containerID="bf3982dac883d0577245bf4c2431159618a9db4dcbcbb5bac899375abccdc166" Dec 02 15:40:46 crc kubenswrapper[4900]: I1202 15:40:46.179537 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:40:46 crc kubenswrapper[4900]: E1202 15:40:46.180086 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:41:00 crc kubenswrapper[4900]: I1202 15:41:00.910700 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:41:00 crc kubenswrapper[4900]: E1202 15:41:00.911425 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:41:14 crc kubenswrapper[4900]: I1202 15:41:14.916636 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:41:14 crc kubenswrapper[4900]: E1202 15:41:14.917712 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:41:26 crc kubenswrapper[4900]: I1202 15:41:26.910591 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:41:26 crc kubenswrapper[4900]: E1202 15:41:26.911548 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:41:41 crc kubenswrapper[4900]: I1202 15:41:41.910846 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:41:41 crc kubenswrapper[4900]: E1202 15:41:41.911481 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:41:55 crc kubenswrapper[4900]: I1202 15:41:55.910359 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:41:55 crc kubenswrapper[4900]: E1202 15:41:55.911088 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:42:07 crc kubenswrapper[4900]: I1202 15:42:07.910587 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:42:07 crc kubenswrapper[4900]: E1202 15:42:07.914091 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:42:21 crc kubenswrapper[4900]: I1202 15:42:21.911017 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:42:21 crc kubenswrapper[4900]: E1202 15:42:21.911832 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:42:33 crc kubenswrapper[4900]: I1202 15:42:33.910233 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:42:33 crc kubenswrapper[4900]: E1202 15:42:33.911085 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:42:35 crc kubenswrapper[4900]: I1202 15:42:35.428179 4900 generic.go:334] "Generic (PLEG): container finished" podID="c354e156-0a05-4523-85a4-ce5d110c449a" containerID="88cd2b9995fb9d146651ebbe397fa509cef8229e36122ceb436987f30ab31cec" exitCode=0 Dec 02 15:42:35 crc kubenswrapper[4900]: I1202 15:42:35.428304 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" event={"ID":"c354e156-0a05-4523-85a4-ce5d110c449a","Type":"ContainerDied","Data":"88cd2b9995fb9d146651ebbe397fa509cef8229e36122ceb436987f30ab31cec"} Dec 02 15:42:36 crc kubenswrapper[4900]: I1202 15:42:36.957593 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.116083 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ceph\") pod \"c354e156-0a05-4523-85a4-ce5d110c449a\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.116188 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkl9f\" (UniqueName: \"kubernetes.io/projected/c354e156-0a05-4523-85a4-ce5d110c449a-kube-api-access-hkl9f\") pod \"c354e156-0a05-4523-85a4-ce5d110c449a\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.116432 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ssh-key\") pod \"c354e156-0a05-4523-85a4-ce5d110c449a\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.116466 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-inventory\") pod \"c354e156-0a05-4523-85a4-ce5d110c449a\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.116488 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-bootstrap-combined-ca-bundle\") pod \"c354e156-0a05-4523-85a4-ce5d110c449a\" (UID: \"c354e156-0a05-4523-85a4-ce5d110c449a\") " Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.122204 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c354e156-0a05-4523-85a4-ce5d110c449a" (UID: "c354e156-0a05-4523-85a4-ce5d110c449a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.122487 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c354e156-0a05-4523-85a4-ce5d110c449a-kube-api-access-hkl9f" (OuterVolumeSpecName: "kube-api-access-hkl9f") pod "c354e156-0a05-4523-85a4-ce5d110c449a" (UID: "c354e156-0a05-4523-85a4-ce5d110c449a"). InnerVolumeSpecName "kube-api-access-hkl9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.123835 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ceph" (OuterVolumeSpecName: "ceph") pod "c354e156-0a05-4523-85a4-ce5d110c449a" (UID: "c354e156-0a05-4523-85a4-ce5d110c449a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.150610 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c354e156-0a05-4523-85a4-ce5d110c449a" (UID: "c354e156-0a05-4523-85a4-ce5d110c449a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.155228 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-inventory" (OuterVolumeSpecName: "inventory") pod "c354e156-0a05-4523-85a4-ce5d110c449a" (UID: "c354e156-0a05-4523-85a4-ce5d110c449a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.220992 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkl9f\" (UniqueName: \"kubernetes.io/projected/c354e156-0a05-4523-85a4-ce5d110c449a-kube-api-access-hkl9f\") on node \"crc\" DevicePath \"\"" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.221026 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.221037 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.221046 4900 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.221055 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c354e156-0a05-4523-85a4-ce5d110c449a-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.456065 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" event={"ID":"c354e156-0a05-4523-85a4-ce5d110c449a","Type":"ContainerDied","Data":"fd9f70ca7a6b50806eb03322ae9b3c822e4827e9d10bb3debcde189578d76454"} Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.456119 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd9f70ca7a6b50806eb03322ae9b3c822e4827e9d10bb3debcde189578d76454" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.456183 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-59jqm" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.564963 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-7bk9z"] Dec 02 15:42:37 crc kubenswrapper[4900]: E1202 15:42:37.565491 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c354e156-0a05-4523-85a4-ce5d110c449a" containerName="bootstrap-openstack-openstack-cell1" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.565506 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c354e156-0a05-4523-85a4-ce5d110c449a" containerName="bootstrap-openstack-openstack-cell1" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.565768 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c354e156-0a05-4523-85a4-ce5d110c449a" containerName="bootstrap-openstack-openstack-cell1" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.566515 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.569475 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.570006 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.570456 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.576265 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.586152 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-7bk9z"] Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.631970 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ssh-key\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.632075 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-inventory\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.632146 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ceph\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.632170 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkztt\" (UniqueName: \"kubernetes.io/projected/85f28781-7762-4415-86d3-4c0b9bc08e2e-kube-api-access-bkztt\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.735085 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ssh-key\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.735255 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-inventory\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc 
kubenswrapper[4900]: I1202 15:42:37.735352 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ceph\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.735391 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkztt\" (UniqueName: \"kubernetes.io/projected/85f28781-7762-4415-86d3-4c0b9bc08e2e-kube-api-access-bkztt\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.741607 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ssh-key\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.741610 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-inventory\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.744516 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ceph\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.752510 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkztt\" (UniqueName: \"kubernetes.io/projected/85f28781-7762-4415-86d3-4c0b9bc08e2e-kube-api-access-bkztt\") pod \"download-cache-openstack-openstack-cell1-7bk9z\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:37 crc kubenswrapper[4900]: I1202 15:42:37.908090 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:42:38 crc kubenswrapper[4900]: I1202 15:42:38.538379 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-7bk9z"] Dec 02 15:42:38 crc kubenswrapper[4900]: I1202 15:42:38.539474 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:42:39 crc kubenswrapper[4900]: I1202 15:42:39.475048 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" event={"ID":"85f28781-7762-4415-86d3-4c0b9bc08e2e","Type":"ContainerStarted","Data":"30211cc3d12f4ddd762a84443514ac755458622c6d013da84f25094a30a3a051"} Dec 02 15:42:40 crc kubenswrapper[4900]: I1202 15:42:40.487348 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" event={"ID":"85f28781-7762-4415-86d3-4c0b9bc08e2e","Type":"ContainerStarted","Data":"d7596c7a132be20e6669899de4bd652b8031a3b42af836ce7c58d35a9e4906fe"} Dec 02 15:42:40 crc kubenswrapper[4900]: I1202 15:42:40.512003 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" podStartSLOduration=2.960177312 podStartE2EDuration="3.511980675s" podCreationTimestamp="2025-12-02 15:42:37 +0000 UTC" firstStartedPulling="2025-12-02 15:42:38.53911752 +0000 UTC m=+7203.954931381" lastFinishedPulling="2025-12-02 15:42:39.090920863 +0000 UTC m=+7204.506734744" observedRunningTime="2025-12-02 15:42:40.506162261 +0000 UTC m=+7205.921976112" watchObservedRunningTime="2025-12-02 15:42:40.511980675 +0000 UTC m=+7205.927794546" Dec 02 15:42:46 crc kubenswrapper[4900]: I1202 15:42:46.910623 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:42:46 crc kubenswrapper[4900]: E1202 15:42:46.911480 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:42:57 crc kubenswrapper[4900]: I1202 15:42:57.911538 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:42:57 crc kubenswrapper[4900]: E1202 15:42:57.912959 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:43:10 crc kubenswrapper[4900]: I1202 15:43:10.912403 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:43:10 crc kubenswrapper[4900]: E1202 15:43:10.913566 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:43:25 crc kubenswrapper[4900]: I1202 15:43:25.911060 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:43:25 crc kubenswrapper[4900]: E1202 15:43:25.911995 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:43:36 crc kubenswrapper[4900]: I1202 15:43:36.911935 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:43:36 crc kubenswrapper[4900]: E1202 15:43:36.913049 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:43:49 crc kubenswrapper[4900]: I1202 15:43:49.911980 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:43:49 crc kubenswrapper[4900]: E1202 15:43:49.913173 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:44:02 crc kubenswrapper[4900]: I1202 15:44:02.910233 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:44:02 crc kubenswrapper[4900]: E1202 15:44:02.911450 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:44:11 crc kubenswrapper[4900]: E1202 15:44:11.718103 4900 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85f28781_7762_4415_86d3_4c0b9bc08e2e.slice/crio-d7596c7a132be20e6669899de4bd652b8031a3b42af836ce7c58d35a9e4906fe.scope\": RecentStats: unable to find data in memory cache]" Dec 02 15:44:12 crc kubenswrapper[4900]: I1202 15:44:12.554494 4900 generic.go:334] "Generic (PLEG): container finished" podID="85f28781-7762-4415-86d3-4c0b9bc08e2e" 
containerID="d7596c7a132be20e6669899de4bd652b8031a3b42af836ce7c58d35a9e4906fe" exitCode=0 Dec 02 15:44:12 crc kubenswrapper[4900]: I1202 15:44:12.554753 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" event={"ID":"85f28781-7762-4415-86d3-4c0b9bc08e2e","Type":"ContainerDied","Data":"d7596c7a132be20e6669899de4bd652b8031a3b42af836ce7c58d35a9e4906fe"} Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.053491 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.096517 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkztt\" (UniqueName: \"kubernetes.io/projected/85f28781-7762-4415-86d3-4c0b9bc08e2e-kube-api-access-bkztt\") pod \"85f28781-7762-4415-86d3-4c0b9bc08e2e\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.096574 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ceph\") pod \"85f28781-7762-4415-86d3-4c0b9bc08e2e\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.096661 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ssh-key\") pod \"85f28781-7762-4415-86d3-4c0b9bc08e2e\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.096705 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-inventory\") pod \"85f28781-7762-4415-86d3-4c0b9bc08e2e\" (UID: \"85f28781-7762-4415-86d3-4c0b9bc08e2e\") " Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.102099 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ceph" (OuterVolumeSpecName: "ceph") pod "85f28781-7762-4415-86d3-4c0b9bc08e2e" (UID: "85f28781-7762-4415-86d3-4c0b9bc08e2e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.104136 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f28781-7762-4415-86d3-4c0b9bc08e2e-kube-api-access-bkztt" (OuterVolumeSpecName: "kube-api-access-bkztt") pod "85f28781-7762-4415-86d3-4c0b9bc08e2e" (UID: "85f28781-7762-4415-86d3-4c0b9bc08e2e"). InnerVolumeSpecName "kube-api-access-bkztt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.134741 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "85f28781-7762-4415-86d3-4c0b9bc08e2e" (UID: "85f28781-7762-4415-86d3-4c0b9bc08e2e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.139898 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-inventory" (OuterVolumeSpecName: "inventory") pod "85f28781-7762-4415-86d3-4c0b9bc08e2e" (UID: "85f28781-7762-4415-86d3-4c0b9bc08e2e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.198816 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkztt\" (UniqueName: \"kubernetes.io/projected/85f28781-7762-4415-86d3-4c0b9bc08e2e-kube-api-access-bkztt\") on node \"crc\" DevicePath \"\"" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.198848 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.198857 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.198864 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85f28781-7762-4415-86d3-4c0b9bc08e2e-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.575962 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" event={"ID":"85f28781-7762-4415-86d3-4c0b9bc08e2e","Type":"ContainerDied","Data":"30211cc3d12f4ddd762a84443514ac755458622c6d013da84f25094a30a3a051"} Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.576277 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30211cc3d12f4ddd762a84443514ac755458622c6d013da84f25094a30a3a051" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.576149 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-7bk9z" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.660876 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-kfvbg"] Dec 02 15:44:14 crc kubenswrapper[4900]: E1202 15:44:14.661267 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f28781-7762-4415-86d3-4c0b9bc08e2e" containerName="download-cache-openstack-openstack-cell1" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.661284 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f28781-7762-4415-86d3-4c0b9bc08e2e" containerName="download-cache-openstack-openstack-cell1" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.661502 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f28781-7762-4415-86d3-4c0b9bc08e2e" containerName="download-cache-openstack-openstack-cell1" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.662250 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.665281 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.665624 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.665791 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.666490 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.676261 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-kfvbg"] Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.710002 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-inventory\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.710056 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ceph\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.710100 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4p6f\" (UniqueName: \"kubernetes.io/projected/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-kube-api-access-z4p6f\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.710218 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ssh-key\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.812007 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-inventory\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.812057 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ceph\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " 
pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.812093 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4p6f\" (UniqueName: \"kubernetes.io/projected/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-kube-api-access-z4p6f\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.812138 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ssh-key\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.817883 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-inventory\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.820554 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ceph\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.824942 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ssh-key\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.829287 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4p6f\" (UniqueName: \"kubernetes.io/projected/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-kube-api-access-z4p6f\") pod \"configure-network-openstack-openstack-cell1-kfvbg\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.920616 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:44:14 crc kubenswrapper[4900]: E1202 15:44:14.921408 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:44:14 crc kubenswrapper[4900]: I1202 15:44:14.981027 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:44:15 crc kubenswrapper[4900]: I1202 15:44:15.576990 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-kfvbg"] Dec 02 15:44:16 crc kubenswrapper[4900]: I1202 15:44:16.601776 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" event={"ID":"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5","Type":"ContainerStarted","Data":"98db12ca01419e2546fff6ab013ec0a4d5d6eb20e4e7137320bd6ee9780a6f8d"} Dec 02 15:44:17 crc kubenswrapper[4900]: I1202 15:44:17.619507 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" event={"ID":"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5","Type":"ContainerStarted","Data":"1b181eab9946ebe471cf22557dda3b6870b8d5f401492901890b46ec63f293c0"} Dec 02 15:44:17 crc kubenswrapper[4900]: I1202 15:44:17.647008 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" podStartSLOduration=2.465577091 podStartE2EDuration="3.646978527s" podCreationTimestamp="2025-12-02 15:44:14 +0000 UTC" firstStartedPulling="2025-12-02 15:44:15.580765414 +0000 UTC m=+7300.996579295" lastFinishedPulling="2025-12-02 15:44:16.76216687 +0000 UTC m=+7302.177980731" observedRunningTime="2025-12-02 15:44:17.634720042 +0000 UTC m=+7303.050533893" watchObservedRunningTime="2025-12-02 15:44:17.646978527 +0000 UTC m=+7303.062792368" Dec 02 15:44:29 crc kubenswrapper[4900]: I1202 15:44:29.910436 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:44:29 crc kubenswrapper[4900]: E1202 15:44:29.911468 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.247518 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-grjvd"] Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.250292 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.262420 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grjvd"] Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.425711 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k68jz\" (UniqueName: \"kubernetes.io/projected/0be63e9d-c212-4b2c-a243-7f4d631f0168-kube-api-access-k68jz\") pod \"redhat-marketplace-grjvd\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.426141 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-utilities\") pod \"redhat-marketplace-grjvd\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.426204 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-catalog-content\") pod \"redhat-marketplace-grjvd\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.528352 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k68jz\" (UniqueName: \"kubernetes.io/projected/0be63e9d-c212-4b2c-a243-7f4d631f0168-kube-api-access-k68jz\") pod \"redhat-marketplace-grjvd\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.528475 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-utilities\") pod \"redhat-marketplace-grjvd\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.528534 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-catalog-content\") pod \"redhat-marketplace-grjvd\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.529061 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-catalog-content\") pod \"redhat-marketplace-grjvd\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.529202 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-utilities\") pod \"redhat-marketplace-grjvd\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.555896 4900 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k68jz\" (UniqueName: \"kubernetes.io/projected/0be63e9d-c212-4b2c-a243-7f4d631f0168-kube-api-access-k68jz\") pod \"redhat-marketplace-grjvd\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:32 crc kubenswrapper[4900]: I1202 15:44:32.576494 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:33 crc kubenswrapper[4900]: I1202 15:44:33.064136 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grjvd"] Dec 02 15:44:33 crc kubenswrapper[4900]: I1202 15:44:33.830183 4900 generic.go:334] "Generic (PLEG): container finished" podID="0be63e9d-c212-4b2c-a243-7f4d631f0168" containerID="ac0a109ca85b199c18ab370f40a0c663c5e983948857c3797a0393f292849766" exitCode=0 Dec 02 15:44:33 crc kubenswrapper[4900]: I1202 15:44:33.830394 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjvd" event={"ID":"0be63e9d-c212-4b2c-a243-7f4d631f0168","Type":"ContainerDied","Data":"ac0a109ca85b199c18ab370f40a0c663c5e983948857c3797a0393f292849766"} Dec 02 15:44:33 crc kubenswrapper[4900]: I1202 15:44:33.830526 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjvd" event={"ID":"0be63e9d-c212-4b2c-a243-7f4d631f0168","Type":"ContainerStarted","Data":"4c83494ce68676eda2a815169d4714dd528d798de5dc19a2df6e1996f6b48161"} Dec 02 15:44:35 crc kubenswrapper[4900]: I1202 15:44:35.851693 4900 generic.go:334] "Generic (PLEG): container finished" podID="0be63e9d-c212-4b2c-a243-7f4d631f0168" containerID="75ea5081cde20c595a754d15c23959e4963663405182b01ec7d19a774e657eba" exitCode=0 Dec 02 15:44:35 crc kubenswrapper[4900]: I1202 15:44:35.851733 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjvd" event={"ID":"0be63e9d-c212-4b2c-a243-7f4d631f0168","Type":"ContainerDied","Data":"75ea5081cde20c595a754d15c23959e4963663405182b01ec7d19a774e657eba"} Dec 02 15:44:36 crc kubenswrapper[4900]: I1202 15:44:36.876119 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjvd" event={"ID":"0be63e9d-c212-4b2c-a243-7f4d631f0168","Type":"ContainerStarted","Data":"fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce"} Dec 02 15:44:36 crc kubenswrapper[4900]: I1202 15:44:36.914157 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-grjvd" podStartSLOduration=2.293649312 podStartE2EDuration="4.914132967s" podCreationTimestamp="2025-12-02 15:44:32 +0000 UTC" firstStartedPulling="2025-12-02 15:44:33.832470721 +0000 UTC m=+7319.248284582" lastFinishedPulling="2025-12-02 15:44:36.452954386 +0000 UTC m=+7321.868768237" observedRunningTime="2025-12-02 15:44:36.90324222 +0000 UTC m=+7322.319056081" watchObservedRunningTime="2025-12-02 15:44:36.914132967 +0000 UTC m=+7322.329946838" Dec 02 15:44:41 crc kubenswrapper[4900]: I1202 15:44:41.911114 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:44:41 crc kubenswrapper[4900]: E1202 15:44:41.912018 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:44:42 crc kubenswrapper[4900]: I1202 15:44:42.577519 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:42 crc kubenswrapper[4900]: I1202 15:44:42.577566 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:42 crc kubenswrapper[4900]: I1202 15:44:42.645033 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:43 crc kubenswrapper[4900]: I1202 15:44:43.004593 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:43 crc kubenswrapper[4900]: I1202 15:44:43.055288 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grjvd"] Dec 02 15:44:44 crc kubenswrapper[4900]: I1202 15:44:44.974752 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-grjvd" podUID="0be63e9d-c212-4b2c-a243-7f4d631f0168" containerName="registry-server" containerID="cri-o://fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce" gracePeriod=2 Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.545713 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.650391 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k68jz\" (UniqueName: \"kubernetes.io/projected/0be63e9d-c212-4b2c-a243-7f4d631f0168-kube-api-access-k68jz\") pod \"0be63e9d-c212-4b2c-a243-7f4d631f0168\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.650475 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-catalog-content\") pod \"0be63e9d-c212-4b2c-a243-7f4d631f0168\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.650590 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-utilities\") pod \"0be63e9d-c212-4b2c-a243-7f4d631f0168\" (UID: \"0be63e9d-c212-4b2c-a243-7f4d631f0168\") " Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.651724 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-utilities" (OuterVolumeSpecName: "utilities") pod "0be63e9d-c212-4b2c-a243-7f4d631f0168" (UID: "0be63e9d-c212-4b2c-a243-7f4d631f0168"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.657161 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be63e9d-c212-4b2c-a243-7f4d631f0168-kube-api-access-k68jz" (OuterVolumeSpecName: "kube-api-access-k68jz") pod "0be63e9d-c212-4b2c-a243-7f4d631f0168" (UID: "0be63e9d-c212-4b2c-a243-7f4d631f0168"). InnerVolumeSpecName "kube-api-access-k68jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.684604 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0be63e9d-c212-4b2c-a243-7f4d631f0168" (UID: "0be63e9d-c212-4b2c-a243-7f4d631f0168"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.753364 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k68jz\" (UniqueName: \"kubernetes.io/projected/0be63e9d-c212-4b2c-a243-7f4d631f0168-kube-api-access-k68jz\") on node \"crc\" DevicePath \"\"" Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.753413 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.753429 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be63e9d-c212-4b2c-a243-7f4d631f0168-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.988251 4900 generic.go:334] "Generic (PLEG): container finished" podID="0be63e9d-c212-4b2c-a243-7f4d631f0168" containerID="fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce" exitCode=0 Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.988291 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjvd" event={"ID":"0be63e9d-c212-4b2c-a243-7f4d631f0168","Type":"ContainerDied","Data":"fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce"} Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.988318 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjvd" event={"ID":"0be63e9d-c212-4b2c-a243-7f4d631f0168","Type":"ContainerDied","Data":"4c83494ce68676eda2a815169d4714dd528d798de5dc19a2df6e1996f6b48161"} Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.988337 4900 scope.go:117] "RemoveContainer" containerID="fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce" Dec 02 15:44:45 crc kubenswrapper[4900]: I1202 15:44:45.988354 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grjvd" Dec 02 15:44:46 crc kubenswrapper[4900]: I1202 15:44:46.018524 4900 scope.go:117] "RemoveContainer" containerID="75ea5081cde20c595a754d15c23959e4963663405182b01ec7d19a774e657eba" Dec 02 15:44:46 crc kubenswrapper[4900]: I1202 15:44:46.066394 4900 scope.go:117] "RemoveContainer" containerID="ac0a109ca85b199c18ab370f40a0c663c5e983948857c3797a0393f292849766" Dec 02 15:44:46 crc kubenswrapper[4900]: I1202 15:44:46.072105 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grjvd"] Dec 02 15:44:46 crc kubenswrapper[4900]: I1202 15:44:46.090290 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-grjvd"] Dec 02 15:44:46 crc kubenswrapper[4900]: I1202 15:44:46.117847 4900 scope.go:117] "RemoveContainer" containerID="fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce" Dec 02 15:44:46 crc kubenswrapper[4900]: E1202 15:44:46.128594 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce\": container with ID starting with fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce not found: ID does not exist" containerID="fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce" Dec 02 15:44:46 crc kubenswrapper[4900]: I1202 15:44:46.128677 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce"} err="failed to get container status \"fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce\": rpc error: code = NotFound desc = could not find container \"fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce\": container with ID starting with fdc25a7506b67ecd722e1222c78efc4a6807eb3ff66fa285c65211ae759327ce not found: ID does not exist" Dec 02 15:44:46 crc kubenswrapper[4900]: I1202 15:44:46.128704 4900 scope.go:117] "RemoveContainer" containerID="75ea5081cde20c595a754d15c23959e4963663405182b01ec7d19a774e657eba" Dec 02 15:44:46 crc kubenswrapper[4900]: E1202 15:44:46.129203 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ea5081cde20c595a754d15c23959e4963663405182b01ec7d19a774e657eba\": container with ID starting with 75ea5081cde20c595a754d15c23959e4963663405182b01ec7d19a774e657eba not found: ID does not exist" containerID="75ea5081cde20c595a754d15c23959e4963663405182b01ec7d19a774e657eba" Dec 02 15:44:46 crc kubenswrapper[4900]: I1202 15:44:46.129377 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ea5081cde20c595a754d15c23959e4963663405182b01ec7d19a774e657eba"} err="failed to get container status \"75ea5081cde20c595a754d15c23959e4963663405182b01ec7d19a774e657eba\": rpc error: code = NotFound desc = could not find container \"75ea5081cde20c595a754d15c23959e4963663405182b01ec7d19a774e657eba\": container with ID starting with 75ea5081cde20c595a754d15c23959e4963663405182b01ec7d19a774e657eba not found: ID does not exist" Dec 02 15:44:46 crc kubenswrapper[4900]: I1202 15:44:46.129553 4900 scope.go:117] "RemoveContainer" containerID="ac0a109ca85b199c18ab370f40a0c663c5e983948857c3797a0393f292849766" Dec 02 15:44:46 crc kubenswrapper[4900]: E1202 15:44:46.130228 4900 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ac0a109ca85b199c18ab370f40a0c663c5e983948857c3797a0393f292849766\": container with ID starting with ac0a109ca85b199c18ab370f40a0c663c5e983948857c3797a0393f292849766 not found: ID does not exist" containerID="ac0a109ca85b199c18ab370f40a0c663c5e983948857c3797a0393f292849766" Dec 02 15:44:46 crc kubenswrapper[4900]: I1202 15:44:46.130285 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0a109ca85b199c18ab370f40a0c663c5e983948857c3797a0393f292849766"} err="failed to get container status \"ac0a109ca85b199c18ab370f40a0c663c5e983948857c3797a0393f292849766\": rpc error: code = NotFound desc = could not find container \"ac0a109ca85b199c18ab370f40a0c663c5e983948857c3797a0393f292849766\": container with ID starting with ac0a109ca85b199c18ab370f40a0c663c5e983948857c3797a0393f292849766 not found: ID does not exist" Dec 02 15:44:46 crc kubenswrapper[4900]: I1202 15:44:46.924965 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be63e9d-c212-4b2c-a243-7f4d631f0168" path="/var/lib/kubelet/pods/0be63e9d-c212-4b2c-a243-7f4d631f0168/volumes" Dec 02 15:44:52 crc kubenswrapper[4900]: I1202 15:44:52.911069 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:44:52 crc kubenswrapper[4900]: E1202 15:44:52.911877 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.162788 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv"] Dec 02 15:45:00 crc kubenswrapper[4900]: E1202 15:45:00.165677 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be63e9d-c212-4b2c-a243-7f4d631f0168" containerName="registry-server" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.165829 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be63e9d-c212-4b2c-a243-7f4d631f0168" containerName="registry-server" Dec 02 15:45:00 crc kubenswrapper[4900]: E1202 15:45:00.165933 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be63e9d-c212-4b2c-a243-7f4d631f0168" containerName="extract-content" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.166021 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be63e9d-c212-4b2c-a243-7f4d631f0168" containerName="extract-content" Dec 02 15:45:00 crc kubenswrapper[4900]: E1202 15:45:00.166163 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be63e9d-c212-4b2c-a243-7f4d631f0168" containerName="extract-utilities" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.166278 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be63e9d-c212-4b2c-a243-7f4d631f0168" containerName="extract-utilities" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.166714 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be63e9d-c212-4b2c-a243-7f4d631f0168" containerName="registry-server" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.167956 4900 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.171164 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.173061 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.177871 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv"] Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.362379 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09b0959d-7880-4c8a-b4c5-48373b46c779-config-volume\") pod \"collect-profiles-29411505-6bxbv\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.362753 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09b0959d-7880-4c8a-b4c5-48373b46c779-secret-volume\") pod \"collect-profiles-29411505-6bxbv\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.362953 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvpkr\" (UniqueName: \"kubernetes.io/projected/09b0959d-7880-4c8a-b4c5-48373b46c779-kube-api-access-nvpkr\") pod \"collect-profiles-29411505-6bxbv\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.464950 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09b0959d-7880-4c8a-b4c5-48373b46c779-config-volume\") pod \"collect-profiles-29411505-6bxbv\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.465078 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09b0959d-7880-4c8a-b4c5-48373b46c779-secret-volume\") pod \"collect-profiles-29411505-6bxbv\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.465164 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvpkr\" (UniqueName: \"kubernetes.io/projected/09b0959d-7880-4c8a-b4c5-48373b46c779-kube-api-access-nvpkr\") pod \"collect-profiles-29411505-6bxbv\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.467098 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/09b0959d-7880-4c8a-b4c5-48373b46c779-config-volume\") pod \"collect-profiles-29411505-6bxbv\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.473730 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09b0959d-7880-4c8a-b4c5-48373b46c779-secret-volume\") pod \"collect-profiles-29411505-6bxbv\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.488958 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvpkr\" (UniqueName: \"kubernetes.io/projected/09b0959d-7880-4c8a-b4c5-48373b46c779-kube-api-access-nvpkr\") pod \"collect-profiles-29411505-6bxbv\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:00 crc kubenswrapper[4900]: I1202 15:45:00.497182 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:01 crc kubenswrapper[4900]: I1202 15:45:01.013569 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv"] Dec 02 15:45:01 crc kubenswrapper[4900]: I1202 15:45:01.143727 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" event={"ID":"09b0959d-7880-4c8a-b4c5-48373b46c779","Type":"ContainerStarted","Data":"736c4f05624c4535b69f9cfb73be0b793b442459148ef58cdf057b367c45e9d3"} Dec 02 15:45:02 crc kubenswrapper[4900]: I1202 15:45:02.157347 4900 generic.go:334] "Generic (PLEG): container finished" podID="09b0959d-7880-4c8a-b4c5-48373b46c779" containerID="064b5e494c9117057043f9fd0c313e37cb87ff2aaf02d40a2a3cd892c6bf5eb1" exitCode=0 Dec 02 15:45:02 crc kubenswrapper[4900]: I1202 15:45:02.157721 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" event={"ID":"09b0959d-7880-4c8a-b4c5-48373b46c779","Type":"ContainerDied","Data":"064b5e494c9117057043f9fd0c313e37cb87ff2aaf02d40a2a3cd892c6bf5eb1"} Dec 02 15:45:03 crc kubenswrapper[4900]: I1202 15:45:03.588293 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:03 crc kubenswrapper[4900]: I1202 15:45:03.774770 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvpkr\" (UniqueName: \"kubernetes.io/projected/09b0959d-7880-4c8a-b4c5-48373b46c779-kube-api-access-nvpkr\") pod \"09b0959d-7880-4c8a-b4c5-48373b46c779\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " Dec 02 15:45:03 crc kubenswrapper[4900]: I1202 15:45:03.774823 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09b0959d-7880-4c8a-b4c5-48373b46c779-secret-volume\") pod \"09b0959d-7880-4c8a-b4c5-48373b46c779\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " Dec 02 15:45:03 crc kubenswrapper[4900]: I1202 15:45:03.774864 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09b0959d-7880-4c8a-b4c5-48373b46c779-config-volume\") pod \"09b0959d-7880-4c8a-b4c5-48373b46c779\" (UID: \"09b0959d-7880-4c8a-b4c5-48373b46c779\") " Dec 02 15:45:03 crc kubenswrapper[4900]: I1202 15:45:03.776175 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09b0959d-7880-4c8a-b4c5-48373b46c779-config-volume" (OuterVolumeSpecName: "config-volume") pod "09b0959d-7880-4c8a-b4c5-48373b46c779" (UID: "09b0959d-7880-4c8a-b4c5-48373b46c779"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:45:03 crc kubenswrapper[4900]: I1202 15:45:03.781175 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b0959d-7880-4c8a-b4c5-48373b46c779-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "09b0959d-7880-4c8a-b4c5-48373b46c779" (UID: "09b0959d-7880-4c8a-b4c5-48373b46c779"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:45:03 crc kubenswrapper[4900]: I1202 15:45:03.785352 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b0959d-7880-4c8a-b4c5-48373b46c779-kube-api-access-nvpkr" (OuterVolumeSpecName: "kube-api-access-nvpkr") pod "09b0959d-7880-4c8a-b4c5-48373b46c779" (UID: "09b0959d-7880-4c8a-b4c5-48373b46c779"). InnerVolumeSpecName "kube-api-access-nvpkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:45:03 crc kubenswrapper[4900]: I1202 15:45:03.876933 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvpkr\" (UniqueName: \"kubernetes.io/projected/09b0959d-7880-4c8a-b4c5-48373b46c779-kube-api-access-nvpkr\") on node \"crc\" DevicePath \"\"" Dec 02 15:45:03 crc kubenswrapper[4900]: I1202 15:45:03.876969 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09b0959d-7880-4c8a-b4c5-48373b46c779-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:45:03 crc kubenswrapper[4900]: I1202 15:45:03.876978 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09b0959d-7880-4c8a-b4c5-48373b46c779-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:45:04 crc kubenswrapper[4900]: I1202 15:45:04.192845 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" event={"ID":"09b0959d-7880-4c8a-b4c5-48373b46c779","Type":"ContainerDied","Data":"736c4f05624c4535b69f9cfb73be0b793b442459148ef58cdf057b367c45e9d3"} Dec 02 15:45:04 crc kubenswrapper[4900]: I1202 15:45:04.193232 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="736c4f05624c4535b69f9cfb73be0b793b442459148ef58cdf057b367c45e9d3" Dec 02 15:45:04 crc kubenswrapper[4900]: I1202 15:45:04.192932 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv" Dec 02 15:45:04 crc kubenswrapper[4900]: I1202 15:45:04.672815 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr"] Dec 02 15:45:04 crc kubenswrapper[4900]: I1202 15:45:04.685157 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411460-lnljr"] Dec 02 15:45:04 crc kubenswrapper[4900]: I1202 15:45:04.931438 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faaca8d7-2465-4690-992b-9b46293206c7" path="/var/lib/kubelet/pods/faaca8d7-2465-4690-992b-9b46293206c7/volumes" Dec 02 15:45:06 crc kubenswrapper[4900]: I1202 15:45:06.910852 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:45:06 crc kubenswrapper[4900]: E1202 15:45:06.911425 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:45:13 crc kubenswrapper[4900]: I1202 15:45:13.479021 4900 scope.go:117] "RemoveContainer" containerID="858b5396c8c7a21b72fc5cc14aad30a90ce312a02fe3e0309874e572b929fec9" Dec 02 15:45:20 crc kubenswrapper[4900]: I1202 15:45:20.910979 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:45:20 crc kubenswrapper[4900]: E1202 15:45:20.911922 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:45:33 crc kubenswrapper[4900]: I1202 15:45:33.910177 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:45:33 crc kubenswrapper[4900]: E1202 15:45:33.910915 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:45:40 crc kubenswrapper[4900]: I1202 15:45:40.597867 4900 generic.go:334] "Generic (PLEG): container finished" podID="2cce9cbb-e921-4a36-9a5c-dd9d077c33a5" containerID="1b181eab9946ebe471cf22557dda3b6870b8d5f401492901890b46ec63f293c0" exitCode=0 Dec 02 15:45:40 crc kubenswrapper[4900]: I1202 15:45:40.598043 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" event={"ID":"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5","Type":"ContainerDied","Data":"1b181eab9946ebe471cf22557dda3b6870b8d5f401492901890b46ec63f293c0"} Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.126327 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.238671 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ceph\") pod \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.238779 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4p6f\" (UniqueName: \"kubernetes.io/projected/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-kube-api-access-z4p6f\") pod \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.238936 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-inventory\") pod \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.239080 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ssh-key\") pod \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\" (UID: \"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5\") " Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.247999 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ceph" (OuterVolumeSpecName: "ceph") pod "2cce9cbb-e921-4a36-9a5c-dd9d077c33a5" (UID: "2cce9cbb-e921-4a36-9a5c-dd9d077c33a5"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.253092 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-kube-api-access-z4p6f" (OuterVolumeSpecName: "kube-api-access-z4p6f") pod "2cce9cbb-e921-4a36-9a5c-dd9d077c33a5" (UID: "2cce9cbb-e921-4a36-9a5c-dd9d077c33a5"). InnerVolumeSpecName "kube-api-access-z4p6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.284924 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2cce9cbb-e921-4a36-9a5c-dd9d077c33a5" (UID: "2cce9cbb-e921-4a36-9a5c-dd9d077c33a5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.297249 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-inventory" (OuterVolumeSpecName: "inventory") pod "2cce9cbb-e921-4a36-9a5c-dd9d077c33a5" (UID: "2cce9cbb-e921-4a36-9a5c-dd9d077c33a5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.341327 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.341373 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4p6f\" (UniqueName: \"kubernetes.io/projected/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-kube-api-access-z4p6f\") on node \"crc\" DevicePath \"\"" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.341389 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.341403 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cce9cbb-e921-4a36-9a5c-dd9d077c33a5-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.622588 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" event={"ID":"2cce9cbb-e921-4a36-9a5c-dd9d077c33a5","Type":"ContainerDied","Data":"98db12ca01419e2546fff6ab013ec0a4d5d6eb20e4e7137320bd6ee9780a6f8d"} Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.622635 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98db12ca01419e2546fff6ab013ec0a4d5d6eb20e4e7137320bd6ee9780a6f8d" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.622670 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kfvbg" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.724355 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-hdjlm"] Dec 02 15:45:42 crc kubenswrapper[4900]: E1202 15:45:42.725161 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b0959d-7880-4c8a-b4c5-48373b46c779" containerName="collect-profiles" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.725191 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b0959d-7880-4c8a-b4c5-48373b46c779" containerName="collect-profiles" Dec 02 15:45:42 crc kubenswrapper[4900]: E1202 15:45:42.725254 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cce9cbb-e921-4a36-9a5c-dd9d077c33a5" containerName="configure-network-openstack-openstack-cell1" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.725265 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cce9cbb-e921-4a36-9a5c-dd9d077c33a5" containerName="configure-network-openstack-openstack-cell1" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.725557 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cce9cbb-e921-4a36-9a5c-dd9d077c33a5" containerName="configure-network-openstack-openstack-cell1" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.725591 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b0959d-7880-4c8a-b4c5-48373b46c779" containerName="collect-profiles" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.726510 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.728792 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.729074 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.729375 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.736502 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.740669 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-hdjlm"] Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.856358 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ceph\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.856417 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnb4\" (UniqueName: \"kubernetes.io/projected/103022c1-b49a-4eeb-80ea-464de2ca78dc-kube-api-access-7hnb4\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc 
kubenswrapper[4900]: I1202 15:45:42.856462 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ssh-key\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.856590 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-inventory\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.957981 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ceph\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.958321 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnb4\" (UniqueName: \"kubernetes.io/projected/103022c1-b49a-4eeb-80ea-464de2ca78dc-kube-api-access-7hnb4\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.958362 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ssh-key\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.958452 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-inventory\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.963148 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ceph\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.963517 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ssh-key\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.966844 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-inventory\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:42 crc kubenswrapper[4900]: I1202 15:45:42.978933 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hnb4\" (UniqueName: \"kubernetes.io/projected/103022c1-b49a-4eeb-80ea-464de2ca78dc-kube-api-access-7hnb4\") pod \"validate-network-openstack-openstack-cell1-hdjlm\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:43 crc kubenswrapper[4900]: I1202 15:45:43.051964 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:43 crc kubenswrapper[4900]: I1202 15:45:43.652346 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-hdjlm"] Dec 02 15:45:44 crc kubenswrapper[4900]: I1202 15:45:44.642466 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" event={"ID":"103022c1-b49a-4eeb-80ea-464de2ca78dc","Type":"ContainerStarted","Data":"4e7b8d16d1c3fed119ff0159f95dcc44b1ab28e6588fea1675ccb0501464124d"} Dec 02 15:45:44 crc kubenswrapper[4900]: I1202 15:45:44.643155 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" event={"ID":"103022c1-b49a-4eeb-80ea-464de2ca78dc","Type":"ContainerStarted","Data":"07b162dc4e9c0cd17ad70ce104a2be2b6d78f0e8988d54c90eb80700ff2ca0cf"} Dec 02 15:45:44 crc kubenswrapper[4900]: I1202 15:45:44.662190 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" podStartSLOduration=2.107711663 podStartE2EDuration="2.662168701s" podCreationTimestamp="2025-12-02 15:45:42 +0000 UTC" firstStartedPulling="2025-12-02 15:45:43.657910143 +0000 UTC m=+7389.073724004" lastFinishedPulling="2025-12-02 15:45:44.212367181 +0000 UTC m=+7389.628181042" observedRunningTime="2025-12-02 15:45:44.6574912 +0000 UTC m=+7390.073305061" watchObservedRunningTime="2025-12-02 15:45:44.662168701 +0000 UTC m=+7390.077982552" Dec 02 15:45:46 crc kubenswrapper[4900]: I1202 15:45:46.910531 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c" Dec 02 15:45:47 crc kubenswrapper[4900]: I1202 15:45:47.682368 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"c8daa2c69c39f58108d67ad8d520f28a353f6ebb814b42d511c01c86f4488732"} Dec 02 15:45:49 crc kubenswrapper[4900]: I1202 15:45:49.708358 4900 generic.go:334] "Generic (PLEG): container finished" podID="103022c1-b49a-4eeb-80ea-464de2ca78dc" containerID="4e7b8d16d1c3fed119ff0159f95dcc44b1ab28e6588fea1675ccb0501464124d" exitCode=0 Dec 02 15:45:49 crc kubenswrapper[4900]: I1202 15:45:49.708480 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" event={"ID":"103022c1-b49a-4eeb-80ea-464de2ca78dc","Type":"ContainerDied","Data":"4e7b8d16d1c3fed119ff0159f95dcc44b1ab28e6588fea1675ccb0501464124d"} Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.123722 4900 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.252202 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hnb4\" (UniqueName: \"kubernetes.io/projected/103022c1-b49a-4eeb-80ea-464de2ca78dc-kube-api-access-7hnb4\") pod \"103022c1-b49a-4eeb-80ea-464de2ca78dc\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.252345 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-inventory\") pod \"103022c1-b49a-4eeb-80ea-464de2ca78dc\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.252400 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ceph\") pod \"103022c1-b49a-4eeb-80ea-464de2ca78dc\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.252779 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ssh-key\") pod \"103022c1-b49a-4eeb-80ea-464de2ca78dc\" (UID: \"103022c1-b49a-4eeb-80ea-464de2ca78dc\") " Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.257921 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103022c1-b49a-4eeb-80ea-464de2ca78dc-kube-api-access-7hnb4" (OuterVolumeSpecName: "kube-api-access-7hnb4") pod "103022c1-b49a-4eeb-80ea-464de2ca78dc" (UID: "103022c1-b49a-4eeb-80ea-464de2ca78dc"). InnerVolumeSpecName "kube-api-access-7hnb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.277019 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ceph" (OuterVolumeSpecName: "ceph") pod "103022c1-b49a-4eeb-80ea-464de2ca78dc" (UID: "103022c1-b49a-4eeb-80ea-464de2ca78dc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.348265 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-inventory" (OuterVolumeSpecName: "inventory") pod "103022c1-b49a-4eeb-80ea-464de2ca78dc" (UID: "103022c1-b49a-4eeb-80ea-464de2ca78dc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.354936 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hnb4\" (UniqueName: \"kubernetes.io/projected/103022c1-b49a-4eeb-80ea-464de2ca78dc-kube-api-access-7hnb4\") on node \"crc\" DevicePath \"\"" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.354966 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.354974 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.355675 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "103022c1-b49a-4eeb-80ea-464de2ca78dc" (UID: "103022c1-b49a-4eeb-80ea-464de2ca78dc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.456511 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/103022c1-b49a-4eeb-80ea-464de2ca78dc-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.557293 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ftnmg"] Dec 02 15:45:51 crc kubenswrapper[4900]: E1202 15:45:51.558145 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103022c1-b49a-4eeb-80ea-464de2ca78dc" containerName="validate-network-openstack-openstack-cell1" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.558170 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="103022c1-b49a-4eeb-80ea-464de2ca78dc" containerName="validate-network-openstack-openstack-cell1" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.558458 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="103022c1-b49a-4eeb-80ea-464de2ca78dc" containerName="validate-network-openstack-openstack-cell1" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.561441 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.592918 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftnmg"] Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.662233 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvlr\" (UniqueName: \"kubernetes.io/projected/14ce31ad-5eb1-449a-9b78-48741f5a05fa-kube-api-access-5fvlr\") pod \"redhat-operators-ftnmg\" (UID: \"14ce31ad-5eb1-449a-9b78-48741f5a05fa\") " pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.662293 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14ce31ad-5eb1-449a-9b78-48741f5a05fa-utilities\") pod \"redhat-operators-ftnmg\" (UID: \"14ce31ad-5eb1-449a-9b78-48741f5a05fa\") " pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.662331 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14ce31ad-5eb1-449a-9b78-48741f5a05fa-catalog-content\") pod \"redhat-operators-ftnmg\" (UID: \"14ce31ad-5eb1-449a-9b78-48741f5a05fa\") " pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.728848 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" event={"ID":"103022c1-b49a-4eeb-80ea-464de2ca78dc","Type":"ContainerDied","Data":"07b162dc4e9c0cd17ad70ce104a2be2b6d78f0e8988d54c90eb80700ff2ca0cf"} Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.728896 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07b162dc4e9c0cd17ad70ce104a2be2b6d78f0e8988d54c90eb80700ff2ca0cf" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.728988 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-hdjlm" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.764038 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fvlr\" (UniqueName: \"kubernetes.io/projected/14ce31ad-5eb1-449a-9b78-48741f5a05fa-kube-api-access-5fvlr\") pod \"redhat-operators-ftnmg\" (UID: \"14ce31ad-5eb1-449a-9b78-48741f5a05fa\") " pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.764088 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14ce31ad-5eb1-449a-9b78-48741f5a05fa-utilities\") pod \"redhat-operators-ftnmg\" (UID: \"14ce31ad-5eb1-449a-9b78-48741f5a05fa\") " pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.764126 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14ce31ad-5eb1-449a-9b78-48741f5a05fa-catalog-content\") pod \"redhat-operators-ftnmg\" (UID: \"14ce31ad-5eb1-449a-9b78-48741f5a05fa\") " pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.764989 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14ce31ad-5eb1-449a-9b78-48741f5a05fa-catalog-content\") pod \"redhat-operators-ftnmg\" (UID: \"14ce31ad-5eb1-449a-9b78-48741f5a05fa\") " pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.765002 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14ce31ad-5eb1-449a-9b78-48741f5a05fa-utilities\") pod \"redhat-operators-ftnmg\" (UID: \"14ce31ad-5eb1-449a-9b78-48741f5a05fa\") " pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.785918 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fvlr\" (UniqueName: \"kubernetes.io/projected/14ce31ad-5eb1-449a-9b78-48741f5a05fa-kube-api-access-5fvlr\") pod \"redhat-operators-ftnmg\" (UID: \"14ce31ad-5eb1-449a-9b78-48741f5a05fa\") " pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.819137 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-xc8qk"] Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.820935 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.823270 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.823345 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.823516 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.827596 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.840137 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-xc8qk"] Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.898623 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.971303 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ssh-key\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.971428 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ceph\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.971716 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpfn\" (UniqueName: \"kubernetes.io/projected/9da867a9-fdf9-413f-81d6-787700e0d41b-kube-api-access-nhpfn\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:51 crc kubenswrapper[4900]: I1202 15:45:51.973834 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-inventory\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.076298 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ssh-key\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.076731 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ceph\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: 
\"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.076790 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhpfn\" (UniqueName: \"kubernetes.io/projected/9da867a9-fdf9-413f-81d6-787700e0d41b-kube-api-access-nhpfn\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.076852 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-inventory\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.081958 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ceph\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.093270 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ssh-key\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.094878 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-inventory\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.095615 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhpfn\" (UniqueName: \"kubernetes.io/projected/9da867a9-fdf9-413f-81d6-787700e0d41b-kube-api-access-nhpfn\") pod \"install-os-openstack-openstack-cell1-xc8qk\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.141619 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.411778 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftnmg"] Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.740463 4900 generic.go:334] "Generic (PLEG): container finished" podID="14ce31ad-5eb1-449a-9b78-48741f5a05fa" containerID="dcd3ab241a755a013a2851c7b4fdce91cc6d20065d30e08c4b09fdf6e7ac9a22" exitCode=0 Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.740564 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftnmg" event={"ID":"14ce31ad-5eb1-449a-9b78-48741f5a05fa","Type":"ContainerDied","Data":"dcd3ab241a755a013a2851c7b4fdce91cc6d20065d30e08c4b09fdf6e7ac9a22"} Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.740859 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftnmg" event={"ID":"14ce31ad-5eb1-449a-9b78-48741f5a05fa","Type":"ContainerStarted","Data":"3c4f0e18a72bc92c1f9aa3bf4f47cd6800a0bbecebc43fb5b1d01be99ab51e3c"} Dec 02 15:45:52 crc kubenswrapper[4900]: W1202 15:45:52.791889 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9da867a9_fdf9_413f_81d6_787700e0d41b.slice/crio-8a2769d58b03996cd68c8d8fe7c2b14e828472ebd3e0758061328f942617322e WatchSource:0}: Error finding container 8a2769d58b03996cd68c8d8fe7c2b14e828472ebd3e0758061328f942617322e: Status 404 returned error can't find the container with id 8a2769d58b03996cd68c8d8fe7c2b14e828472ebd3e0758061328f942617322e Dec 02 15:45:52 crc kubenswrapper[4900]: I1202 15:45:52.797042 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-xc8qk"] Dec 02 15:45:53 crc kubenswrapper[4900]: I1202 15:45:53.756772 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-xc8qk" event={"ID":"9da867a9-fdf9-413f-81d6-787700e0d41b","Type":"ContainerStarted","Data":"88239c92285d3909f9bdc9e09547f21fee97627afdf5d61c5965958ae47a9eda"} Dec 02 15:45:53 crc kubenswrapper[4900]: I1202 15:45:53.757225 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-xc8qk" event={"ID":"9da867a9-fdf9-413f-81d6-787700e0d41b","Type":"ContainerStarted","Data":"8a2769d58b03996cd68c8d8fe7c2b14e828472ebd3e0758061328f942617322e"} Dec 02 15:45:53 crc kubenswrapper[4900]: I1202 15:45:53.774623 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-xc8qk" podStartSLOduration=2.298180866 podStartE2EDuration="2.774602387s" podCreationTimestamp="2025-12-02 15:45:51 +0000 UTC" firstStartedPulling="2025-12-02 15:45:52.79883853 +0000 UTC m=+7398.214652381" lastFinishedPulling="2025-12-02 15:45:53.275260041 +0000 UTC m=+7398.691073902" observedRunningTime="2025-12-02 15:45:53.77294442 +0000 UTC m=+7399.188758271" watchObservedRunningTime="2025-12-02 15:45:53.774602387 +0000 UTC m=+7399.190416248" Dec 02 15:46:03 crc kubenswrapper[4900]: I1202 15:46:03.866865 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftnmg" event={"ID":"14ce31ad-5eb1-449a-9b78-48741f5a05fa","Type":"ContainerStarted","Data":"c404f7b08079a41e3d5e3c4b3370af07179afa06b66f6ed18eace2b3fb668d33"} Dec 02 15:46:05 crc kubenswrapper[4900]: I1202 15:46:05.890798 4900 
generic.go:334] "Generic (PLEG): container finished" podID="14ce31ad-5eb1-449a-9b78-48741f5a05fa" containerID="c404f7b08079a41e3d5e3c4b3370af07179afa06b66f6ed18eace2b3fb668d33" exitCode=0 Dec 02 15:46:05 crc kubenswrapper[4900]: I1202 15:46:05.890883 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftnmg" event={"ID":"14ce31ad-5eb1-449a-9b78-48741f5a05fa","Type":"ContainerDied","Data":"c404f7b08079a41e3d5e3c4b3370af07179afa06b66f6ed18eace2b3fb668d33"} Dec 02 15:46:06 crc kubenswrapper[4900]: I1202 15:46:06.905746 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftnmg" event={"ID":"14ce31ad-5eb1-449a-9b78-48741f5a05fa","Type":"ContainerStarted","Data":"03ad578651653623efc5669bfbaf719056cbd0cfd1c60ba4cbe7dc9dee829f4f"} Dec 02 15:46:06 crc kubenswrapper[4900]: I1202 15:46:06.942315 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ftnmg" podStartSLOduration=2.070926139 podStartE2EDuration="15.942291123s" podCreationTimestamp="2025-12-02 15:45:51 +0000 UTC" firstStartedPulling="2025-12-02 15:45:52.744568662 +0000 UTC m=+7398.160382513" lastFinishedPulling="2025-12-02 15:46:06.615933646 +0000 UTC m=+7412.031747497" observedRunningTime="2025-12-02 15:46:06.92690311 +0000 UTC m=+7412.342716961" watchObservedRunningTime="2025-12-02 15:46:06.942291123 +0000 UTC m=+7412.358104984" Dec 02 15:46:11 crc kubenswrapper[4900]: I1202 15:46:11.899059 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:46:11 crc kubenswrapper[4900]: I1202 15:46:11.899617 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:46:12 crc kubenswrapper[4900]: I1202 15:46:12.971520 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ftnmg" podUID="14ce31ad-5eb1-449a-9b78-48741f5a05fa" containerName="registry-server" probeResult="failure" output=< Dec 02 15:46:12 crc kubenswrapper[4900]: timeout: failed to connect service ":50051" within 1s Dec 02 15:46:12 crc kubenswrapper[4900]: > Dec 02 15:46:21 crc kubenswrapper[4900]: I1202 15:46:21.974983 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.053952 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ftnmg" Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.127391 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftnmg"] Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.229153 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dww6z"] Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.229457 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dww6z" podUID="7aa847ae-bf9f-4727-aa6d-235721900502" containerName="registry-server" containerID="cri-o://3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee" gracePeriod=2 Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.694887 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dww6z" Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.765188 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-utilities\") pod \"7aa847ae-bf9f-4727-aa6d-235721900502\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.765253 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-catalog-content\") pod \"7aa847ae-bf9f-4727-aa6d-235721900502\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.765358 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5j95\" (UniqueName: \"kubernetes.io/projected/7aa847ae-bf9f-4727-aa6d-235721900502-kube-api-access-m5j95\") pod \"7aa847ae-bf9f-4727-aa6d-235721900502\" (UID: \"7aa847ae-bf9f-4727-aa6d-235721900502\") " Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.766819 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-utilities" (OuterVolumeSpecName: "utilities") pod "7aa847ae-bf9f-4727-aa6d-235721900502" (UID: "7aa847ae-bf9f-4727-aa6d-235721900502"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.771113 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa847ae-bf9f-4727-aa6d-235721900502-kube-api-access-m5j95" (OuterVolumeSpecName: "kube-api-access-m5j95") pod "7aa847ae-bf9f-4727-aa6d-235721900502" (UID: "7aa847ae-bf9f-4727-aa6d-235721900502"). InnerVolumeSpecName "kube-api-access-m5j95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.867082 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.867111 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5j95\" (UniqueName: \"kubernetes.io/projected/7aa847ae-bf9f-4727-aa6d-235721900502-kube-api-access-m5j95\") on node \"crc\" DevicePath \"\"" Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.870787 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7aa847ae-bf9f-4727-aa6d-235721900502" (UID: "7aa847ae-bf9f-4727-aa6d-235721900502"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:46:22 crc kubenswrapper[4900]: I1202 15:46:22.968248 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa847ae-bf9f-4727-aa6d-235721900502-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.144684 4900 generic.go:334] "Generic (PLEG): container finished" podID="7aa847ae-bf9f-4727-aa6d-235721900502" containerID="3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee" exitCode=0 Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.145867 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dww6z" Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.146404 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dww6z" event={"ID":"7aa847ae-bf9f-4727-aa6d-235721900502","Type":"ContainerDied","Data":"3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee"} Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.146436 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dww6z" event={"ID":"7aa847ae-bf9f-4727-aa6d-235721900502","Type":"ContainerDied","Data":"bda28ea6c10d02ee0cf4500e3f55892846eccb44ceacb8583e2c505cb005e166"} Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.146454 4900 scope.go:117] "RemoveContainer" containerID="3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee" Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.173381 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dww6z"] Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.181228 4900 scope.go:117] "RemoveContainer" containerID="683263b30304e60b8cca8b47740438b32d5e7a690d7dc7a24dc61cdf9cf006d5" Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.184004 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dww6z"] Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.205038 4900 scope.go:117] "RemoveContainer" containerID="09c3c0cff08d58e6ca4ac4e912e0146b8bc37ece20b6c00e9d2aa3f8f8d116b9" Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.254809 4900 scope.go:117] "RemoveContainer" containerID="3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee" Dec 02 15:46:23 crc kubenswrapper[4900]: E1202 15:46:23.255188 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee\": container with ID starting with 3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee not found: ID does not exist" containerID="3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee" Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.255233 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee"} err="failed to get container status \"3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee\": rpc error: code = NotFound desc = could not find container \"3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee\": container with ID starting with 3bbc77e6cc99112d7ca71c5cc4be79e7865459f49cf97110902d88308af61bee not found: ID does not exist" Dec 02 15:46:23 crc 
kubenswrapper[4900]: I1202 15:46:23.255260 4900 scope.go:117] "RemoveContainer" containerID="683263b30304e60b8cca8b47740438b32d5e7a690d7dc7a24dc61cdf9cf006d5" Dec 02 15:46:23 crc kubenswrapper[4900]: E1202 15:46:23.255518 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"683263b30304e60b8cca8b47740438b32d5e7a690d7dc7a24dc61cdf9cf006d5\": container with ID starting with 683263b30304e60b8cca8b47740438b32d5e7a690d7dc7a24dc61cdf9cf006d5 not found: ID does not exist" containerID="683263b30304e60b8cca8b47740438b32d5e7a690d7dc7a24dc61cdf9cf006d5" Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.255542 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683263b30304e60b8cca8b47740438b32d5e7a690d7dc7a24dc61cdf9cf006d5"} err="failed to get container status \"683263b30304e60b8cca8b47740438b32d5e7a690d7dc7a24dc61cdf9cf006d5\": rpc error: code = NotFound desc = could not find container \"683263b30304e60b8cca8b47740438b32d5e7a690d7dc7a24dc61cdf9cf006d5\": container with ID starting with 683263b30304e60b8cca8b47740438b32d5e7a690d7dc7a24dc61cdf9cf006d5 not found: ID does not exist" Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.255563 4900 scope.go:117] "RemoveContainer" containerID="09c3c0cff08d58e6ca4ac4e912e0146b8bc37ece20b6c00e9d2aa3f8f8d116b9" Dec 02 15:46:23 crc kubenswrapper[4900]: E1202 15:46:23.255770 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c3c0cff08d58e6ca4ac4e912e0146b8bc37ece20b6c00e9d2aa3f8f8d116b9\": container with ID starting with 09c3c0cff08d58e6ca4ac4e912e0146b8bc37ece20b6c00e9d2aa3f8f8d116b9 not found: ID does not exist" containerID="09c3c0cff08d58e6ca4ac4e912e0146b8bc37ece20b6c00e9d2aa3f8f8d116b9" Dec 02 15:46:23 crc kubenswrapper[4900]: I1202 15:46:23.255793 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c3c0cff08d58e6ca4ac4e912e0146b8bc37ece20b6c00e9d2aa3f8f8d116b9"} err="failed to get container status \"09c3c0cff08d58e6ca4ac4e912e0146b8bc37ece20b6c00e9d2aa3f8f8d116b9\": rpc error: code = NotFound desc = could not find container \"09c3c0cff08d58e6ca4ac4e912e0146b8bc37ece20b6c00e9d2aa3f8f8d116b9\": container with ID starting with 09c3c0cff08d58e6ca4ac4e912e0146b8bc37ece20b6c00e9d2aa3f8f8d116b9 not found: ID does not exist" Dec 02 15:46:24 crc kubenswrapper[4900]: I1202 15:46:24.922332 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa847ae-bf9f-4727-aa6d-235721900502" path="/var/lib/kubelet/pods/7aa847ae-bf9f-4727-aa6d-235721900502/volumes" Dec 02 15:46:40 crc kubenswrapper[4900]: I1202 15:46:40.319029 4900 generic.go:334] "Generic (PLEG): container finished" podID="9da867a9-fdf9-413f-81d6-787700e0d41b" containerID="88239c92285d3909f9bdc9e09547f21fee97627afdf5d61c5965958ae47a9eda" exitCode=0 Dec 02 15:46:40 crc kubenswrapper[4900]: I1202 15:46:40.319142 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-xc8qk" event={"ID":"9da867a9-fdf9-413f-81d6-787700e0d41b","Type":"ContainerDied","Data":"88239c92285d3909f9bdc9e09547f21fee97627afdf5d61c5965958ae47a9eda"} Dec 02 15:46:41 crc kubenswrapper[4900]: I1202 15:46:41.773359 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:46:41 crc kubenswrapper[4900]: I1202 15:46:41.896583 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ceph\") pod \"9da867a9-fdf9-413f-81d6-787700e0d41b\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " Dec 02 15:46:41 crc kubenswrapper[4900]: I1202 15:46:41.896868 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhpfn\" (UniqueName: \"kubernetes.io/projected/9da867a9-fdf9-413f-81d6-787700e0d41b-kube-api-access-nhpfn\") pod \"9da867a9-fdf9-413f-81d6-787700e0d41b\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " Dec 02 15:46:41 crc kubenswrapper[4900]: I1202 15:46:41.896965 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-inventory\") pod \"9da867a9-fdf9-413f-81d6-787700e0d41b\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " Dec 02 15:46:41 crc kubenswrapper[4900]: I1202 15:46:41.897066 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ssh-key\") pod \"9da867a9-fdf9-413f-81d6-787700e0d41b\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " Dec 02 15:46:41 crc kubenswrapper[4900]: I1202 15:46:41.902915 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ceph" (OuterVolumeSpecName: "ceph") pod "9da867a9-fdf9-413f-81d6-787700e0d41b" (UID: "9da867a9-fdf9-413f-81d6-787700e0d41b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:46:41 crc kubenswrapper[4900]: I1202 15:46:41.903835 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da867a9-fdf9-413f-81d6-787700e0d41b-kube-api-access-nhpfn" (OuterVolumeSpecName: "kube-api-access-nhpfn") pod "9da867a9-fdf9-413f-81d6-787700e0d41b" (UID: "9da867a9-fdf9-413f-81d6-787700e0d41b"). InnerVolumeSpecName "kube-api-access-nhpfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:46:41 crc kubenswrapper[4900]: E1202 15:46:41.929155 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-inventory podName:9da867a9-fdf9-413f-81d6-787700e0d41b nodeName:}" failed. No retries permitted until 2025-12-02 15:46:42.429124255 +0000 UTC m=+7447.844938116 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-inventory") pod "9da867a9-fdf9-413f-81d6-787700e0d41b" (UID: "9da867a9-fdf9-413f-81d6-787700e0d41b") : error deleting /var/lib/kubelet/pods/9da867a9-fdf9-413f-81d6-787700e0d41b/volume-subpaths: remove /var/lib/kubelet/pods/9da867a9-fdf9-413f-81d6-787700e0d41b/volume-subpaths: no such file or directory Dec 02 15:46:41 crc kubenswrapper[4900]: I1202 15:46:41.933161 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9da867a9-fdf9-413f-81d6-787700e0d41b" (UID: "9da867a9-fdf9-413f-81d6-787700e0d41b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:46:41 crc kubenswrapper[4900]: I1202 15:46:41.999766 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhpfn\" (UniqueName: \"kubernetes.io/projected/9da867a9-fdf9-413f-81d6-787700e0d41b-kube-api-access-nhpfn\") on node \"crc\" DevicePath \"\"" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.000024 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.000451 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.341384 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-xc8qk" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.342844 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-xc8qk" event={"ID":"9da867a9-fdf9-413f-81d6-787700e0d41b","Type":"ContainerDied","Data":"8a2769d58b03996cd68c8d8fe7c2b14e828472ebd3e0758061328f942617322e"} Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.342910 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a2769d58b03996cd68c8d8fe7c2b14e828472ebd3e0758061328f942617322e" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.443096 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-lzpj4"] Dec 02 15:46:42 crc kubenswrapper[4900]: E1202 15:46:42.443624 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa847ae-bf9f-4727-aa6d-235721900502" containerName="extract-content" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.443663 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa847ae-bf9f-4727-aa6d-235721900502" containerName="extract-content" Dec 02 15:46:42 crc kubenswrapper[4900]: E1202 15:46:42.443678 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa847ae-bf9f-4727-aa6d-235721900502" containerName="extract-utilities" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.443688 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa847ae-bf9f-4727-aa6d-235721900502" containerName="extract-utilities" Dec 02 15:46:42 crc kubenswrapper[4900]: E1202 15:46:42.443743 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da867a9-fdf9-413f-81d6-787700e0d41b" containerName="install-os-openstack-openstack-cell1" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.443752 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da867a9-fdf9-413f-81d6-787700e0d41b" containerName="install-os-openstack-openstack-cell1" Dec 02 15:46:42 crc kubenswrapper[4900]: E1202 15:46:42.443774 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa847ae-bf9f-4727-aa6d-235721900502" containerName="registry-server" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.443781 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa847ae-bf9f-4727-aa6d-235721900502" containerName="registry-server" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.444048 4900 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9da867a9-fdf9-413f-81d6-787700e0d41b" containerName="install-os-openstack-openstack-cell1" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.444068 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa847ae-bf9f-4727-aa6d-235721900502" containerName="registry-server" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.445058 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.464818 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-lzpj4"] Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.510447 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-inventory\") pod \"9da867a9-fdf9-413f-81d6-787700e0d41b\" (UID: \"9da867a9-fdf9-413f-81d6-787700e0d41b\") " Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.516880 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-inventory" (OuterVolumeSpecName: "inventory") pod "9da867a9-fdf9-413f-81d6-787700e0d41b" (UID: "9da867a9-fdf9-413f-81d6-787700e0d41b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.612839 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-inventory\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.612895 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ssh-key\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.612918 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q4lp\" (UniqueName: \"kubernetes.io/projected/655dba29-218e-4558-b61e-127bab45af83-kube-api-access-2q4lp\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.613033 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ceph\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.613155 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9da867a9-fdf9-413f-81d6-787700e0d41b-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.715831 4900 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ceph\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.716052 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-inventory\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.716096 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ssh-key\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.716127 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q4lp\" (UniqueName: \"kubernetes.io/projected/655dba29-218e-4558-b61e-127bab45af83-kube-api-access-2q4lp\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.720298 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ceph\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.722801 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-inventory\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.736290 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ssh-key\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.737282 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q4lp\" (UniqueName: \"kubernetes.io/projected/655dba29-218e-4558-b61e-127bab45af83-kube-api-access-2q4lp\") pod \"configure-os-openstack-openstack-cell1-lzpj4\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:42 crc kubenswrapper[4900]: I1202 15:46:42.881351 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:46:43 crc kubenswrapper[4900]: I1202 15:46:43.526929 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-lzpj4"] Dec 02 15:46:44 crc kubenswrapper[4900]: I1202 15:46:44.366680 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" event={"ID":"655dba29-218e-4558-b61e-127bab45af83","Type":"ContainerStarted","Data":"e18f0b6dd6b12bde94f1fb1d2d7499f869783b3a9fa1305152b1ebf434dcdcba"} Dec 02 15:46:45 crc kubenswrapper[4900]: I1202 15:46:45.379120 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" event={"ID":"655dba29-218e-4558-b61e-127bab45af83","Type":"ContainerStarted","Data":"1d96465540fa694f199364ff530e4d3aa8b110324e701685a1888420627d4148"} Dec 02 15:46:45 crc kubenswrapper[4900]: I1202 15:46:45.406531 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" podStartSLOduration=2.309894461 podStartE2EDuration="3.40651535s" podCreationTimestamp="2025-12-02 15:46:42 +0000 UTC" firstStartedPulling="2025-12-02 15:46:43.522315412 +0000 UTC m=+7448.938129303" lastFinishedPulling="2025-12-02 15:46:44.618936341 +0000 UTC m=+7450.034750192" observedRunningTime="2025-12-02 15:46:45.402189558 +0000 UTC m=+7450.818003419" watchObservedRunningTime="2025-12-02 15:46:45.40651535 +0000 UTC m=+7450.822329211" Dec 02 15:47:29 crc kubenswrapper[4900]: I1202 15:47:29.947324 4900 generic.go:334] "Generic (PLEG): container finished" podID="655dba29-218e-4558-b61e-127bab45af83" containerID="1d96465540fa694f199364ff530e4d3aa8b110324e701685a1888420627d4148" exitCode=0 Dec 02 15:47:29 crc kubenswrapper[4900]: I1202 15:47:29.947428 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" event={"ID":"655dba29-218e-4558-b61e-127bab45af83","Type":"ContainerDied","Data":"1d96465540fa694f199364ff530e4d3aa8b110324e701685a1888420627d4148"} Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.413177 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.552329 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ceph\") pod \"655dba29-218e-4558-b61e-127bab45af83\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.552487 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ssh-key\") pod \"655dba29-218e-4558-b61e-127bab45af83\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.552750 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-inventory\") pod \"655dba29-218e-4558-b61e-127bab45af83\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.552804 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q4lp\" (UniqueName: \"kubernetes.io/projected/655dba29-218e-4558-b61e-127bab45af83-kube-api-access-2q4lp\") pod \"655dba29-218e-4558-b61e-127bab45af83\" (UID: \"655dba29-218e-4558-b61e-127bab45af83\") " Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.557884 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/655dba29-218e-4558-b61e-127bab45af83-kube-api-access-2q4lp" (OuterVolumeSpecName: "kube-api-access-2q4lp") pod "655dba29-218e-4558-b61e-127bab45af83" (UID: "655dba29-218e-4558-b61e-127bab45af83"). InnerVolumeSpecName "kube-api-access-2q4lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.559031 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ceph" (OuterVolumeSpecName: "ceph") pod "655dba29-218e-4558-b61e-127bab45af83" (UID: "655dba29-218e-4558-b61e-127bab45af83"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.607161 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "655dba29-218e-4558-b61e-127bab45af83" (UID: "655dba29-218e-4558-b61e-127bab45af83"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.612819 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-inventory" (OuterVolumeSpecName: "inventory") pod "655dba29-218e-4558-b61e-127bab45af83" (UID: "655dba29-218e-4558-b61e-127bab45af83"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.655711 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.655995 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.656005 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q4lp\" (UniqueName: \"kubernetes.io/projected/655dba29-218e-4558-b61e-127bab45af83-kube-api-access-2q4lp\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.656015 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/655dba29-218e-4558-b61e-127bab45af83-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.968211 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" event={"ID":"655dba29-218e-4558-b61e-127bab45af83","Type":"ContainerDied","Data":"e18f0b6dd6b12bde94f1fb1d2d7499f869783b3a9fa1305152b1ebf434dcdcba"} Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.968256 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18f0b6dd6b12bde94f1fb1d2d7499f869783b3a9fa1305152b1ebf434dcdcba" Dec 02 15:47:31 crc kubenswrapper[4900]: I1202 15:47:31.968278 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-lzpj4" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.077270 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-dxvcd"] Dec 02 15:47:32 crc kubenswrapper[4900]: E1202 15:47:32.077854 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655dba29-218e-4558-b61e-127bab45af83" containerName="configure-os-openstack-openstack-cell1" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.077876 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="655dba29-218e-4558-b61e-127bab45af83" containerName="configure-os-openstack-openstack-cell1" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.078139 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="655dba29-218e-4558-b61e-127bab45af83" containerName="configure-os-openstack-openstack-cell1" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.080079 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.087104 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.087315 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.087429 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.087597 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.093953 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-dxvcd"] Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.176152 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ceph\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.176274 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.176314 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-inventory-0\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.176348 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbv5\" (UniqueName: \"kubernetes.io/projected/63024711-d35e-4023-82d6-66e310453c12-kube-api-access-vxbv5\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.278502 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.278605 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-inventory-0\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.278680 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxbv5\" 
(UniqueName: \"kubernetes.io/projected/63024711-d35e-4023-82d6-66e310453c12-kube-api-access-vxbv5\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.278852 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ceph\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.283786 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ceph\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.286990 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-inventory-0\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.294892 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.295441 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxbv5\" (UniqueName: \"kubernetes.io/projected/63024711-d35e-4023-82d6-66e310453c12-kube-api-access-vxbv5\") pod \"ssh-known-hosts-openstack-dxvcd\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:32 crc kubenswrapper[4900]: I1202 15:47:32.409865 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:33 crc kubenswrapper[4900]: I1202 15:47:33.023625 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-dxvcd"] Dec 02 15:47:33 crc kubenswrapper[4900]: I1202 15:47:33.994063 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-dxvcd" event={"ID":"63024711-d35e-4023-82d6-66e310453c12","Type":"ContainerStarted","Data":"8a4f0a0918740214bd81227d905f214085fb7ea6e116f82a838f6a275dc89538"} Dec 02 15:47:33 crc kubenswrapper[4900]: I1202 15:47:33.994658 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-dxvcd" event={"ID":"63024711-d35e-4023-82d6-66e310453c12","Type":"ContainerStarted","Data":"d938910f532ca954924ee6c8b1c8ffb8209ae1d75369e9180c8da32b16b6fd84"} Dec 02 15:47:34 crc kubenswrapper[4900]: I1202 15:47:34.020145 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-dxvcd" podStartSLOduration=1.556996055 podStartE2EDuration="2.020125432s" podCreationTimestamp="2025-12-02 15:47:32 +0000 UTC" firstStartedPulling="2025-12-02 15:47:33.027907352 +0000 UTC m=+7498.443721223" lastFinishedPulling="2025-12-02 15:47:33.491036729 +0000 UTC m=+7498.906850600" observedRunningTime="2025-12-02 15:47:34.017426066 +0000 UTC m=+7499.433239917" watchObservedRunningTime="2025-12-02 15:47:34.020125432 +0000 UTC m=+7499.435939293" Dec 02 15:47:43 crc kubenswrapper[4900]: I1202 15:47:43.098009 4900 generic.go:334] "Generic (PLEG): container finished" podID="63024711-d35e-4023-82d6-66e310453c12" containerID="8a4f0a0918740214bd81227d905f214085fb7ea6e116f82a838f6a275dc89538" exitCode=0 Dec 02 15:47:43 crc kubenswrapper[4900]: I1202 15:47:43.098113 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-dxvcd" event={"ID":"63024711-d35e-4023-82d6-66e310453c12","Type":"ContainerDied","Data":"8a4f0a0918740214bd81227d905f214085fb7ea6e116f82a838f6a275dc89538"} Dec 02 15:47:44 crc kubenswrapper[4900]: I1202 15:47:44.637060 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:44 crc kubenswrapper[4900]: I1202 15:47:44.781729 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ceph\") pod \"63024711-d35e-4023-82d6-66e310453c12\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " Dec 02 15:47:44 crc kubenswrapper[4900]: I1202 15:47:44.781956 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-inventory-0\") pod \"63024711-d35e-4023-82d6-66e310453c12\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " Dec 02 15:47:44 crc kubenswrapper[4900]: I1202 15:47:44.782042 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ssh-key-openstack-cell1\") pod \"63024711-d35e-4023-82d6-66e310453c12\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " Dec 02 15:47:44 crc kubenswrapper[4900]: I1202 15:47:44.782107 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxbv5\" (UniqueName: \"kubernetes.io/projected/63024711-d35e-4023-82d6-66e310453c12-kube-api-access-vxbv5\") pod \"63024711-d35e-4023-82d6-66e310453c12\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " Dec 02 15:47:44 crc kubenswrapper[4900]: I1202 15:47:44.787988 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63024711-d35e-4023-82d6-66e310453c12-kube-api-access-vxbv5" (OuterVolumeSpecName: "kube-api-access-vxbv5") pod "63024711-d35e-4023-82d6-66e310453c12" (UID: "63024711-d35e-4023-82d6-66e310453c12"). InnerVolumeSpecName "kube-api-access-vxbv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:47:44 crc kubenswrapper[4900]: I1202 15:47:44.788429 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ceph" (OuterVolumeSpecName: "ceph") pod "63024711-d35e-4023-82d6-66e310453c12" (UID: "63024711-d35e-4023-82d6-66e310453c12"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:47:44 crc kubenswrapper[4900]: E1202 15:47:44.815466 4900 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ssh-key-openstack-cell1 podName:63024711-d35e-4023-82d6-66e310453c12 nodeName:}" failed. No retries permitted until 2025-12-02 15:47:45.315379626 +0000 UTC m=+7510.731193477 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-cell1" (UniqueName: "kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ssh-key-openstack-cell1") pod "63024711-d35e-4023-82d6-66e310453c12" (UID: "63024711-d35e-4023-82d6-66e310453c12") : error deleting /var/lib/kubelet/pods/63024711-d35e-4023-82d6-66e310453c12/volume-subpaths: remove /var/lib/kubelet/pods/63024711-d35e-4023-82d6-66e310453c12/volume-subpaths: no such file or directory Dec 02 15:47:44 crc kubenswrapper[4900]: I1202 15:47:44.821551 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "63024711-d35e-4023-82d6-66e310453c12" (UID: "63024711-d35e-4023-82d6-66e310453c12"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:47:44 crc kubenswrapper[4900]: I1202 15:47:44.884979 4900 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:44 crc kubenswrapper[4900]: I1202 15:47:44.885013 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxbv5\" (UniqueName: \"kubernetes.io/projected/63024711-d35e-4023-82d6-66e310453c12-kube-api-access-vxbv5\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:44 crc kubenswrapper[4900]: I1202 15:47:44.885024 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.123316 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-dxvcd" event={"ID":"63024711-d35e-4023-82d6-66e310453c12","Type":"ContainerDied","Data":"d938910f532ca954924ee6c8b1c8ffb8209ae1d75369e9180c8da32b16b6fd84"} Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.123372 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d938910f532ca954924ee6c8b1c8ffb8209ae1d75369e9180c8da32b16b6fd84" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.123484 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-dxvcd" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.211060 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-4fnz6"] Dec 02 15:47:45 crc kubenswrapper[4900]: E1202 15:47:45.211523 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63024711-d35e-4023-82d6-66e310453c12" containerName="ssh-known-hosts-openstack" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.211541 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="63024711-d35e-4023-82d6-66e310453c12" containerName="ssh-known-hosts-openstack" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.211751 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="63024711-d35e-4023-82d6-66e310453c12" containerName="ssh-known-hosts-openstack" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.212452 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.249065 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-4fnz6"] Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.293229 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcpqg\" (UniqueName: \"kubernetes.io/projected/0f3377fb-c012-4e57-b165-5b9848f46ac1-kube-api-access-zcpqg\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.293338 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-inventory\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.293371 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ceph\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.293484 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ssh-key\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.395153 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ssh-key-openstack-cell1\") pod \"63024711-d35e-4023-82d6-66e310453c12\" (UID: \"63024711-d35e-4023-82d6-66e310453c12\") " Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.395842 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ssh-key\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.395936 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcpqg\" (UniqueName: \"kubernetes.io/projected/0f3377fb-c012-4e57-b165-5b9848f46ac1-kube-api-access-zcpqg\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.396070 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-inventory\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 
15:47:45.396293 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ceph\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.401371 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ssh-key\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.402364 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "63024711-d35e-4023-82d6-66e310453c12" (UID: "63024711-d35e-4023-82d6-66e310453c12"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.411333 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-inventory\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.414012 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ceph\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.423740 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcpqg\" (UniqueName: \"kubernetes.io/projected/0f3377fb-c012-4e57-b165-5b9848f46ac1-kube-api-access-zcpqg\") pod \"run-os-openstack-openstack-cell1-4fnz6\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.498936 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/63024711-d35e-4023-82d6-66e310453c12-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:45 crc kubenswrapper[4900]: I1202 15:47:45.540443 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:46 crc kubenswrapper[4900]: I1202 15:47:46.194943 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-4fnz6"] Dec 02 15:47:46 crc kubenswrapper[4900]: I1202 15:47:46.202394 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:47:47 crc kubenswrapper[4900]: I1202 15:47:47.144481 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-4fnz6" event={"ID":"0f3377fb-c012-4e57-b165-5b9848f46ac1","Type":"ContainerStarted","Data":"6cad2fe43788c20ef484a317b4d15fc5d0b1094deb3d72971c73f30ec2fe7da6"} Dec 02 15:47:48 crc kubenswrapper[4900]: I1202 15:47:48.156189 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-4fnz6" event={"ID":"0f3377fb-c012-4e57-b165-5b9848f46ac1","Type":"ContainerStarted","Data":"c06b68ee8ab074358eb2d5ad487dddad36025b5b0e673cd4dfd589debd8d5a31"} Dec 02 15:47:48 crc kubenswrapper[4900]: I1202 15:47:48.181128 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-4fnz6" podStartSLOduration=2.078510212 podStartE2EDuration="3.181111038s" podCreationTimestamp="2025-12-02 15:47:45 +0000 UTC" firstStartedPulling="2025-12-02 15:47:46.201884816 +0000 UTC m=+7511.617698697" lastFinishedPulling="2025-12-02 15:47:47.304485672 +0000 UTC m=+7512.720299523" observedRunningTime="2025-12-02 15:47:48.170772797 +0000 UTC m=+7513.586586648" watchObservedRunningTime="2025-12-02 15:47:48.181111038 +0000 UTC m=+7513.596924889" Dec 02 15:47:55 crc kubenswrapper[4900]: I1202 15:47:55.230843 4900 generic.go:334] "Generic (PLEG): container finished" podID="0f3377fb-c012-4e57-b165-5b9848f46ac1" containerID="c06b68ee8ab074358eb2d5ad487dddad36025b5b0e673cd4dfd589debd8d5a31" exitCode=0 Dec 02 15:47:55 crc kubenswrapper[4900]: I1202 15:47:55.230929 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-4fnz6" event={"ID":"0f3377fb-c012-4e57-b165-5b9848f46ac1","Type":"ContainerDied","Data":"c06b68ee8ab074358eb2d5ad487dddad36025b5b0e673cd4dfd589debd8d5a31"} Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.745344 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.870511 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcpqg\" (UniqueName: \"kubernetes.io/projected/0f3377fb-c012-4e57-b165-5b9848f46ac1-kube-api-access-zcpqg\") pod \"0f3377fb-c012-4e57-b165-5b9848f46ac1\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.870664 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-inventory\") pod \"0f3377fb-c012-4e57-b165-5b9848f46ac1\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.870740 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ceph\") pod \"0f3377fb-c012-4e57-b165-5b9848f46ac1\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.870840 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ssh-key\") pod \"0f3377fb-c012-4e57-b165-5b9848f46ac1\" (UID: \"0f3377fb-c012-4e57-b165-5b9848f46ac1\") " Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.875607 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ceph" (OuterVolumeSpecName: "ceph") pod "0f3377fb-c012-4e57-b165-5b9848f46ac1" (UID: "0f3377fb-c012-4e57-b165-5b9848f46ac1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.876151 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3377fb-c012-4e57-b165-5b9848f46ac1-kube-api-access-zcpqg" (OuterVolumeSpecName: "kube-api-access-zcpqg") pod "0f3377fb-c012-4e57-b165-5b9848f46ac1" (UID: "0f3377fb-c012-4e57-b165-5b9848f46ac1"). InnerVolumeSpecName "kube-api-access-zcpqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.898965 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0f3377fb-c012-4e57-b165-5b9848f46ac1" (UID: "0f3377fb-c012-4e57-b165-5b9848f46ac1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.901463 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-inventory" (OuterVolumeSpecName: "inventory") pod "0f3377fb-c012-4e57-b165-5b9848f46ac1" (UID: "0f3377fb-c012-4e57-b165-5b9848f46ac1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.973344 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcpqg\" (UniqueName: \"kubernetes.io/projected/0f3377fb-c012-4e57-b165-5b9848f46ac1-kube-api-access-zcpqg\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.973387 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.973403 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:56 crc kubenswrapper[4900]: I1202 15:47:56.973416 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f3377fb-c012-4e57-b165-5b9848f46ac1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.257706 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-4fnz6" event={"ID":"0f3377fb-c012-4e57-b165-5b9848f46ac1","Type":"ContainerDied","Data":"6cad2fe43788c20ef484a317b4d15fc5d0b1094deb3d72971c73f30ec2fe7da6"} Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.257745 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cad2fe43788c20ef484a317b4d15fc5d0b1094deb3d72971c73f30ec2fe7da6" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.257771 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-4fnz6" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.351945 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-d6qss"] Dec 02 15:47:57 crc kubenswrapper[4900]: E1202 15:47:57.352423 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3377fb-c012-4e57-b165-5b9848f46ac1" containerName="run-os-openstack-openstack-cell1" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.352440 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3377fb-c012-4e57-b165-5b9848f46ac1" containerName="run-os-openstack-openstack-cell1" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.352619 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3377fb-c012-4e57-b165-5b9848f46ac1" containerName="run-os-openstack-openstack-cell1" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.353385 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.356577 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.356600 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.356987 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.357969 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.369816 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-d6qss"] Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.484021 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b49z7\" (UniqueName: \"kubernetes.io/projected/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-kube-api-access-b49z7\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.484412 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.484509 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ceph\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.484592 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-inventory\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.586578 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b49z7\" (UniqueName: \"kubernetes.io/projected/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-kube-api-access-b49z7\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.586634 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.586698 4900 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ceph\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.586746 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-inventory\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.592307 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-inventory\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.593606 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.593807 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ceph\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.604307 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b49z7\" (UniqueName: \"kubernetes.io/projected/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-kube-api-access-b49z7\") pod \"reboot-os-openstack-openstack-cell1-d6qss\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:57 crc kubenswrapper[4900]: I1202 15:47:57.720141 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:47:58 crc kubenswrapper[4900]: I1202 15:47:58.274818 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-d6qss"] Dec 02 15:47:59 crc kubenswrapper[4900]: I1202 15:47:59.280590 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" event={"ID":"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b","Type":"ContainerStarted","Data":"e3ce076e6b9c2c0d0ec15fff812136c2934b9641edb5e621915d7a350c009170"} Dec 02 15:47:59 crc kubenswrapper[4900]: I1202 15:47:59.281986 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" event={"ID":"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b","Type":"ContainerStarted","Data":"60b0bc2ef2175629f44ff50613ed423f354a3c5ba1fc25c085d213f9a58581f5"} Dec 02 15:47:59 crc kubenswrapper[4900]: I1202 15:47:59.301950 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" podStartSLOduration=1.6839153310000001 podStartE2EDuration="2.301927527s" podCreationTimestamp="2025-12-02 15:47:57 +0000 UTC" firstStartedPulling="2025-12-02 15:47:58.277680816 +0000 UTC m=+7523.693494667" lastFinishedPulling="2025-12-02 15:47:58.895693012 +0000 UTC m=+7524.311506863" observedRunningTime="2025-12-02 15:47:59.296451353 +0000 UTC m=+7524.712265204" watchObservedRunningTime="2025-12-02 15:47:59.301927527 +0000 UTC m=+7524.717741378" Dec 02 15:48:15 crc kubenswrapper[4900]: I1202 15:48:15.116962 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:48:15 crc kubenswrapper[4900]: I1202 15:48:15.117717 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:48:15 crc kubenswrapper[4900]: I1202 15:48:15.468688 4900 generic.go:334] "Generic (PLEG): container finished" podID="eaac2118-0f22-4b6f-a877-b5cd1ff3d60b" containerID="e3ce076e6b9c2c0d0ec15fff812136c2934b9641edb5e621915d7a350c009170" exitCode=0 Dec 02 15:48:15 crc kubenswrapper[4900]: I1202 15:48:15.468744 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" event={"ID":"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b","Type":"ContainerDied","Data":"e3ce076e6b9c2c0d0ec15fff812136c2934b9641edb5e621915d7a350c009170"} Dec 02 15:48:16 crc kubenswrapper[4900]: I1202 15:48:16.976807 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.070594 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ceph\") pod \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.070662 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ssh-key\") pod \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.070740 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b49z7\" (UniqueName: \"kubernetes.io/projected/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-kube-api-access-b49z7\") pod \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.070802 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-inventory\") pod \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\" (UID: \"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b\") " Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.078235 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-kube-api-access-b49z7" (OuterVolumeSpecName: "kube-api-access-b49z7") pod "eaac2118-0f22-4b6f-a877-b5cd1ff3d60b" (UID: "eaac2118-0f22-4b6f-a877-b5cd1ff3d60b"). InnerVolumeSpecName "kube-api-access-b49z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.081814 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ceph" (OuterVolumeSpecName: "ceph") pod "eaac2118-0f22-4b6f-a877-b5cd1ff3d60b" (UID: "eaac2118-0f22-4b6f-a877-b5cd1ff3d60b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.099884 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-inventory" (OuterVolumeSpecName: "inventory") pod "eaac2118-0f22-4b6f-a877-b5cd1ff3d60b" (UID: "eaac2118-0f22-4b6f-a877-b5cd1ff3d60b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.100707 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eaac2118-0f22-4b6f-a877-b5cd1ff3d60b" (UID: "eaac2118-0f22-4b6f-a877-b5cd1ff3d60b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.174819 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.174846 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.174858 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b49z7\" (UniqueName: \"kubernetes.io/projected/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-kube-api-access-b49z7\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.174867 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaac2118-0f22-4b6f-a877-b5cd1ff3d60b-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.506783 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.510853 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-d6qss" event={"ID":"eaac2118-0f22-4b6f-a877-b5cd1ff3d60b","Type":"ContainerDied","Data":"60b0bc2ef2175629f44ff50613ed423f354a3c5ba1fc25c085d213f9a58581f5"} Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.515417 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60b0bc2ef2175629f44ff50613ed423f354a3c5ba1fc25c085d213f9a58581f5" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.643950 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-fj8fw"] Dec 02 15:48:17 crc kubenswrapper[4900]: E1202 15:48:17.644575 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaac2118-0f22-4b6f-a877-b5cd1ff3d60b" containerName="reboot-os-openstack-openstack-cell1" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.644605 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaac2118-0f22-4b6f-a877-b5cd1ff3d60b" containerName="reboot-os-openstack-openstack-cell1" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.645361 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaac2118-0f22-4b6f-a877-b5cd1ff3d60b" containerName="reboot-os-openstack-openstack-cell1" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.646886 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.654694 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.655101 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.656974 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.657001 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.661347 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-fj8fw"] Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.789490 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.789698 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lzzz\" (UniqueName: \"kubernetes.io/projected/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-kube-api-access-9lzzz\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.790230 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.790386 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.790504 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ceph\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.790538 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-inventory\") pod 
\"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.790702 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ssh-key\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.790771 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.790915 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.790973 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.791137 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.791309 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.893561 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.893699 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lzzz\" (UniqueName: 
\"kubernetes.io/projected/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-kube-api-access-9lzzz\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.893743 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.893798 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.893862 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ceph\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.893938 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-inventory\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.894004 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ssh-key\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.894055 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.894124 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.894196 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.894310 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.894432 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.900894 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.901637 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ssh-key\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.902806 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.903257 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.904433 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.906132 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-inventory\") pod 
\"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.906884 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ceph\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.906982 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.907816 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.909139 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.909159 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.928363 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lzzz\" (UniqueName: \"kubernetes.io/projected/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-kube-api-access-9lzzz\") pod \"install-certs-openstack-openstack-cell1-fj8fw\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:17 crc kubenswrapper[4900]: I1202 15:48:17.970197 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:18 crc kubenswrapper[4900]: I1202 15:48:18.616484 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-fj8fw"] Dec 02 15:48:19 crc kubenswrapper[4900]: I1202 15:48:19.541273 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" event={"ID":"b6f1eb22-77a6-4faf-a13f-a7e9d060360c","Type":"ContainerStarted","Data":"d7969e7077d822ba04592ffcb49a06ace470efb2660af9fcc30566bedb32030b"} Dec 02 15:48:19 crc kubenswrapper[4900]: I1202 15:48:19.576213 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" podStartSLOduration=1.979299373 podStartE2EDuration="2.576186165s" podCreationTimestamp="2025-12-02 15:48:17 +0000 UTC" firstStartedPulling="2025-12-02 15:48:18.629120476 +0000 UTC m=+7544.044934367" lastFinishedPulling="2025-12-02 15:48:19.226007298 +0000 UTC m=+7544.641821159" observedRunningTime="2025-12-02 15:48:19.567897342 +0000 UTC m=+7544.983711223" watchObservedRunningTime="2025-12-02 15:48:19.576186165 +0000 UTC m=+7544.992000036" Dec 02 15:48:20 crc kubenswrapper[4900]: I1202 15:48:20.550703 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" event={"ID":"b6f1eb22-77a6-4faf-a13f-a7e9d060360c","Type":"ContainerStarted","Data":"5c8fa5e0d1981b87f1e26edbfff05642acfcc42a23992cce8964ba1281387805"} Dec 02 15:48:37 crc kubenswrapper[4900]: I1202 15:48:37.864249 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nq2vs"] Dec 02 15:48:37 crc kubenswrapper[4900]: I1202 15:48:37.867732 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:37 crc kubenswrapper[4900]: I1202 15:48:37.878936 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nq2vs"] Dec 02 15:48:37 crc kubenswrapper[4900]: I1202 15:48:37.972670 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kvq8\" (UniqueName: \"kubernetes.io/projected/60ead230-c0f8-4966-8403-3a691c7b741e-kube-api-access-6kvq8\") pod \"certified-operators-nq2vs\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:37 crc kubenswrapper[4900]: I1202 15:48:37.972729 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-catalog-content\") pod \"certified-operators-nq2vs\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:37 crc kubenswrapper[4900]: I1202 15:48:37.973116 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-utilities\") pod \"certified-operators-nq2vs\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:38 crc kubenswrapper[4900]: I1202 15:48:38.075735 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-utilities\") pod \"certified-operators-nq2vs\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:38 crc kubenswrapper[4900]: I1202 15:48:38.075975 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kvq8\" (UniqueName: \"kubernetes.io/projected/60ead230-c0f8-4966-8403-3a691c7b741e-kube-api-access-6kvq8\") pod \"certified-operators-nq2vs\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:38 crc kubenswrapper[4900]: I1202 15:48:38.076006 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-catalog-content\") pod \"certified-operators-nq2vs\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:38 crc kubenswrapper[4900]: I1202 15:48:38.076697 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-catalog-content\") pod \"certified-operators-nq2vs\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:38 crc kubenswrapper[4900]: I1202 15:48:38.077026 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-utilities\") pod \"certified-operators-nq2vs\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:38 crc kubenswrapper[4900]: I1202 15:48:38.111220 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6kvq8\" (UniqueName: \"kubernetes.io/projected/60ead230-c0f8-4966-8403-3a691c7b741e-kube-api-access-6kvq8\") pod \"certified-operators-nq2vs\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:38 crc kubenswrapper[4900]: I1202 15:48:38.202479 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:38 crc kubenswrapper[4900]: I1202 15:48:38.723443 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nq2vs"] Dec 02 15:48:38 crc kubenswrapper[4900]: I1202 15:48:38.767843 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2vs" event={"ID":"60ead230-c0f8-4966-8403-3a691c7b741e","Type":"ContainerStarted","Data":"6d7a64b5cee305d66a89f9ed9e5eb22aec4d9790d62377edaaf25a7e220f0d6f"} Dec 02 15:48:39 crc kubenswrapper[4900]: I1202 15:48:39.779778 4900 generic.go:334] "Generic (PLEG): container finished" podID="60ead230-c0f8-4966-8403-3a691c7b741e" containerID="ab97415b8ca9479e2222b53091d6cd0e0a190345671908cfc59b68e79a5d0827" exitCode=0 Dec 02 15:48:39 crc kubenswrapper[4900]: I1202 15:48:39.780049 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2vs" event={"ID":"60ead230-c0f8-4966-8403-3a691c7b741e","Type":"ContainerDied","Data":"ab97415b8ca9479e2222b53091d6cd0e0a190345671908cfc59b68e79a5d0827"} Dec 02 15:48:39 crc kubenswrapper[4900]: I1202 15:48:39.784502 4900 generic.go:334] "Generic (PLEG): container finished" podID="b6f1eb22-77a6-4faf-a13f-a7e9d060360c" containerID="5c8fa5e0d1981b87f1e26edbfff05642acfcc42a23992cce8964ba1281387805" exitCode=0 Dec 02 15:48:39 crc kubenswrapper[4900]: I1202 15:48:39.784552 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" event={"ID":"b6f1eb22-77a6-4faf-a13f-a7e9d060360c","Type":"ContainerDied","Data":"5c8fa5e0d1981b87f1e26edbfff05642acfcc42a23992cce8964ba1281387805"} Dec 02 15:48:40 crc kubenswrapper[4900]: I1202 15:48:40.801851 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2vs" event={"ID":"60ead230-c0f8-4966-8403-3a691c7b741e","Type":"ContainerStarted","Data":"0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1"} Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.411246 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.461315 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-telemetry-combined-ca-bundle\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.461396 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-sriov-combined-ca-bundle\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.461457 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ceph\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.461551 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lzzz\" (UniqueName: \"kubernetes.io/projected/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-kube-api-access-9lzzz\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.461665 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-metadata-combined-ca-bundle\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.461692 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-nova-combined-ca-bundle\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.461737 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-bootstrap-combined-ca-bundle\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.461821 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-inventory\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.461981 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-dhcp-combined-ca-bundle\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.462056 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ovn-combined-ca-bundle\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.462103 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ssh-key\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.462143 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-libvirt-combined-ca-bundle\") pod \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\" (UID: \"b6f1eb22-77a6-4faf-a13f-a7e9d060360c\") " Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.469724 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.469752 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.470311 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.470946 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.472066 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.473945 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-kube-api-access-9lzzz" (OuterVolumeSpecName: "kube-api-access-9lzzz") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "kube-api-access-9lzzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.474903 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.475821 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.482039 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.482129 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ceph" (OuterVolumeSpecName: "ceph") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.503021 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-inventory" (OuterVolumeSpecName: "inventory") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.505689 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b6f1eb22-77a6-4faf-a13f-a7e9d060360c" (UID: "b6f1eb22-77a6-4faf-a13f-a7e9d060360c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565197 4900 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565292 4900 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565306 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565498 4900 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565507 4900 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565515 4900 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565525 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565534 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lzzz\" (UniqueName: \"kubernetes.io/projected/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-kube-api-access-9lzzz\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565543 4900 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565554 4900 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565565 4900 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.565573 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6f1eb22-77a6-4faf-a13f-a7e9d060360c-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.819809 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.819817 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-fj8fw" event={"ID":"b6f1eb22-77a6-4faf-a13f-a7e9d060360c","Type":"ContainerDied","Data":"d7969e7077d822ba04592ffcb49a06ace470efb2660af9fcc30566bedb32030b"} Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.819917 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7969e7077d822ba04592ffcb49a06ace470efb2660af9fcc30566bedb32030b" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.823795 4900 generic.go:334] "Generic (PLEG): container finished" podID="60ead230-c0f8-4966-8403-3a691c7b741e" containerID="0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1" exitCode=0 Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.823835 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2vs" event={"ID":"60ead230-c0f8-4966-8403-3a691c7b741e","Type":"ContainerDied","Data":"0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1"} Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.939423 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-qvzjb"] Dec 02 15:48:41 crc kubenswrapper[4900]: E1202 15:48:41.940732 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6f1eb22-77a6-4faf-a13f-a7e9d060360c" containerName="install-certs-openstack-openstack-cell1" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.940769 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6f1eb22-77a6-4faf-a13f-a7e9d060360c" containerName="install-certs-openstack-openstack-cell1" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.941208 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6f1eb22-77a6-4faf-a13f-a7e9d060360c" containerName="install-certs-openstack-openstack-cell1" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.942711 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.945272 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.945587 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.945617 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.945719 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.970947 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-qvzjb"] Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.977794 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.978198 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-inventory\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.978320 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrg5d\" (UniqueName: \"kubernetes.io/projected/291b7d12-e918-405a-83a3-5c3fa5733f83-kube-api-access-lrg5d\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:41 crc kubenswrapper[4900]: I1202 15:48:41.978518 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ceph\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.081280 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.081447 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-inventory\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.081487 4900 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrg5d\" (UniqueName: \"kubernetes.io/projected/291b7d12-e918-405a-83a3-5c3fa5733f83-kube-api-access-lrg5d\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.081553 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ceph\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.093367 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ceph\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.093407 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-inventory\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.093421 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.107174 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrg5d\" (UniqueName: \"kubernetes.io/projected/291b7d12-e918-405a-83a3-5c3fa5733f83-kube-api-access-lrg5d\") pod \"ceph-client-openstack-openstack-cell1-qvzjb\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.269663 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.838487 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2vs" event={"ID":"60ead230-c0f8-4966-8403-3a691c7b741e","Type":"ContainerStarted","Data":"e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8"} Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.860541 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-qvzjb"] Dec 02 15:48:42 crc kubenswrapper[4900]: I1202 15:48:42.862500 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nq2vs" podStartSLOduration=3.290441451 podStartE2EDuration="5.862482611s" podCreationTimestamp="2025-12-02 15:48:37 +0000 UTC" firstStartedPulling="2025-12-02 15:48:39.784327994 +0000 UTC m=+7565.200141835" lastFinishedPulling="2025-12-02 15:48:42.356369144 +0000 UTC m=+7567.772182995" observedRunningTime="2025-12-02 15:48:42.857973154 +0000 UTC m=+7568.273787055" watchObservedRunningTime="2025-12-02 15:48:42.862482611 +0000 UTC m=+7568.278296462" Dec 02 15:48:42 crc kubenswrapper[4900]: W1202 15:48:42.867203 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod291b7d12_e918_405a_83a3_5c3fa5733f83.slice/crio-fcb75a7f8bff0f307c9a8ae7365308db1351f45f18a0d6ebb821fea3d91a6227 WatchSource:0}: Error finding container fcb75a7f8bff0f307c9a8ae7365308db1351f45f18a0d6ebb821fea3d91a6227: Status 404 returned error can't find the container with id fcb75a7f8bff0f307c9a8ae7365308db1351f45f18a0d6ebb821fea3d91a6227 Dec 02 15:48:43 crc kubenswrapper[4900]: I1202 15:48:43.851970 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" event={"ID":"291b7d12-e918-405a-83a3-5c3fa5733f83","Type":"ContainerStarted","Data":"4abd7c681b7154ca445cc84fe61e23b0bb3c1c8e8821a7f1260acaaa1280bf38"} Dec 02 15:48:43 crc kubenswrapper[4900]: I1202 15:48:43.852279 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" event={"ID":"291b7d12-e918-405a-83a3-5c3fa5733f83","Type":"ContainerStarted","Data":"fcb75a7f8bff0f307c9a8ae7365308db1351f45f18a0d6ebb821fea3d91a6227"} Dec 02 15:48:43 crc kubenswrapper[4900]: I1202 15:48:43.899522 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" podStartSLOduration=2.416432753 podStartE2EDuration="2.899492671s" podCreationTimestamp="2025-12-02 15:48:41 +0000 UTC" firstStartedPulling="2025-12-02 15:48:42.870554158 +0000 UTC m=+7568.286368009" lastFinishedPulling="2025-12-02 15:48:43.353614046 +0000 UTC m=+7568.769427927" observedRunningTime="2025-12-02 15:48:43.887254056 +0000 UTC m=+7569.303067917" watchObservedRunningTime="2025-12-02 15:48:43.899492671 +0000 UTC m=+7569.315306522" Dec 02 15:48:45 crc kubenswrapper[4900]: I1202 15:48:45.117135 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:48:45 crc kubenswrapper[4900]: I1202 15:48:45.117802 4900 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:48:48 crc kubenswrapper[4900]: I1202 15:48:48.203530 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:48 crc kubenswrapper[4900]: I1202 15:48:48.204151 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:48 crc kubenswrapper[4900]: I1202 15:48:48.263051 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:48 crc kubenswrapper[4900]: I1202 15:48:48.929752 4900 generic.go:334] "Generic (PLEG): container finished" podID="291b7d12-e918-405a-83a3-5c3fa5733f83" containerID="4abd7c681b7154ca445cc84fe61e23b0bb3c1c8e8821a7f1260acaaa1280bf38" exitCode=0 Dec 02 15:48:48 crc kubenswrapper[4900]: I1202 15:48:48.931138 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" event={"ID":"291b7d12-e918-405a-83a3-5c3fa5733f83","Type":"ContainerDied","Data":"4abd7c681b7154ca445cc84fe61e23b0bb3c1c8e8821a7f1260acaaa1280bf38"} Dec 02 15:48:48 crc kubenswrapper[4900]: I1202 15:48:48.999766 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:49 crc kubenswrapper[4900]: I1202 15:48:49.054859 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nq2vs"] Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.619143 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.794400 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ceph\") pod \"291b7d12-e918-405a-83a3-5c3fa5733f83\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.794551 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrg5d\" (UniqueName: \"kubernetes.io/projected/291b7d12-e918-405a-83a3-5c3fa5733f83-kube-api-access-lrg5d\") pod \"291b7d12-e918-405a-83a3-5c3fa5733f83\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.794757 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ssh-key\") pod \"291b7d12-e918-405a-83a3-5c3fa5733f83\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.794891 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-inventory\") pod \"291b7d12-e918-405a-83a3-5c3fa5733f83\" (UID: \"291b7d12-e918-405a-83a3-5c3fa5733f83\") " Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.800855 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ceph" (OuterVolumeSpecName: "ceph") pod "291b7d12-e918-405a-83a3-5c3fa5733f83" (UID: "291b7d12-e918-405a-83a3-5c3fa5733f83"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.801256 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291b7d12-e918-405a-83a3-5c3fa5733f83-kube-api-access-lrg5d" (OuterVolumeSpecName: "kube-api-access-lrg5d") pod "291b7d12-e918-405a-83a3-5c3fa5733f83" (UID: "291b7d12-e918-405a-83a3-5c3fa5733f83"). InnerVolumeSpecName "kube-api-access-lrg5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.829617 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "291b7d12-e918-405a-83a3-5c3fa5733f83" (UID: "291b7d12-e918-405a-83a3-5c3fa5733f83"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.831486 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-inventory" (OuterVolumeSpecName: "inventory") pod "291b7d12-e918-405a-83a3-5c3fa5733f83" (UID: "291b7d12-e918-405a-83a3-5c3fa5733f83"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.898857 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrg5d\" (UniqueName: \"kubernetes.io/projected/291b7d12-e918-405a-83a3-5c3fa5733f83-kube-api-access-lrg5d\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.898914 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.898934 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.898952 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/291b7d12-e918-405a-83a3-5c3fa5733f83-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.953182 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.953267 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-qvzjb" event={"ID":"291b7d12-e918-405a-83a3-5c3fa5733f83","Type":"ContainerDied","Data":"fcb75a7f8bff0f307c9a8ae7365308db1351f45f18a0d6ebb821fea3d91a6227"} Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.953317 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb75a7f8bff0f307c9a8ae7365308db1351f45f18a0d6ebb821fea3d91a6227" Dec 02 15:48:50 crc kubenswrapper[4900]: I1202 15:48:50.953556 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nq2vs" podUID="60ead230-c0f8-4966-8403-3a691c7b741e" containerName="registry-server" containerID="cri-o://e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8" gracePeriod=2 Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.121397 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-twxzg"] Dec 02 15:48:51 crc kubenswrapper[4900]: E1202 15:48:51.122368 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291b7d12-e918-405a-83a3-5c3fa5733f83" containerName="ceph-client-openstack-openstack-cell1" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.122464 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="291b7d12-e918-405a-83a3-5c3fa5733f83" containerName="ceph-client-openstack-openstack-cell1" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.122760 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="291b7d12-e918-405a-83a3-5c3fa5733f83" containerName="ceph-client-openstack-openstack-cell1" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.123820 4900 util.go:30] "No sandbox for pod can be found. 
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.126080 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r"
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.126433 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.127009 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.127520 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.127693 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.154613 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-twxzg"]
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.342594 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-inventory\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg"
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.342662 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg"
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.342694 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ssh-key\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg"
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.342713 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg"
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.342801 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tp75\" (UniqueName: \"kubernetes.io/projected/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-kube-api-access-4tp75\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg"
Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.342833 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ceph\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID:
\"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.444676 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-inventory\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.445592 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.445619 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ssh-key\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.445635 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.445827 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tp75\" (UniqueName: \"kubernetes.io/projected/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-kube-api-access-4tp75\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.445872 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ceph\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.448541 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.453571 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-inventory\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.464690 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.469206 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ssh-key\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.470638 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ceph\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.494414 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tp75\" (UniqueName: \"kubernetes.io/projected/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-kube-api-access-4tp75\") pod \"ovn-openstack-openstack-cell1-twxzg\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") " pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.546518 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.795385 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.959469 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kvq8\" (UniqueName: \"kubernetes.io/projected/60ead230-c0f8-4966-8403-3a691c7b741e-kube-api-access-6kvq8\") pod \"60ead230-c0f8-4966-8403-3a691c7b741e\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.960250 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-utilities\") pod \"60ead230-c0f8-4966-8403-3a691c7b741e\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.960343 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-catalog-content\") pod \"60ead230-c0f8-4966-8403-3a691c7b741e\" (UID: \"60ead230-c0f8-4966-8403-3a691c7b741e\") " Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.961449 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-utilities" (OuterVolumeSpecName: "utilities") pod "60ead230-c0f8-4966-8403-3a691c7b741e" (UID: "60ead230-c0f8-4966-8403-3a691c7b741e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.964020 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.965266 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ead230-c0f8-4966-8403-3a691c7b741e-kube-api-access-6kvq8" (OuterVolumeSpecName: "kube-api-access-6kvq8") pod "60ead230-c0f8-4966-8403-3a691c7b741e" (UID: "60ead230-c0f8-4966-8403-3a691c7b741e"). InnerVolumeSpecName "kube-api-access-6kvq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.969547 4900 generic.go:334] "Generic (PLEG): container finished" podID="60ead230-c0f8-4966-8403-3a691c7b741e" containerID="e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8" exitCode=0 Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.969625 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2vs" event={"ID":"60ead230-c0f8-4966-8403-3a691c7b741e","Type":"ContainerDied","Data":"e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8"} Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.969686 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq2vs" event={"ID":"60ead230-c0f8-4966-8403-3a691c7b741e","Type":"ContainerDied","Data":"6d7a64b5cee305d66a89f9ed9e5eb22aec4d9790d62377edaaf25a7e220f0d6f"} Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.969688 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nq2vs" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.969710 4900 scope.go:117] "RemoveContainer" containerID="e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8" Dec 02 15:48:51 crc kubenswrapper[4900]: I1202 15:48:51.999016 4900 scope.go:117] "RemoveContainer" containerID="0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.011095 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60ead230-c0f8-4966-8403-3a691c7b741e" (UID: "60ead230-c0f8-4966-8403-3a691c7b741e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.027288 4900 scope.go:117] "RemoveContainer" containerID="ab97415b8ca9479e2222b53091d6cd0e0a190345671908cfc59b68e79a5d0827" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.065062 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60ead230-c0f8-4966-8403-3a691c7b741e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.065102 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kvq8\" (UniqueName: \"kubernetes.io/projected/60ead230-c0f8-4966-8403-3a691c7b741e-kube-api-access-6kvq8\") on node \"crc\" DevicePath \"\"" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.072752 4900 scope.go:117] "RemoveContainer" containerID="e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8" Dec 02 15:48:52 crc kubenswrapper[4900]: E1202 15:48:52.073444 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8\": container with ID starting with e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8 not found: ID does not exist" containerID="e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.073491 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8"} err="failed to get container status \"e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8\": rpc error: code = NotFound desc = could not find container \"e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8\": container with ID starting with e33105c9af3ae0ea011e10531107594e7782a26d3fed4ef95b3257c2cb304bb8 not found: ID does not exist" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.073521 4900 scope.go:117] "RemoveContainer" containerID="0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1" Dec 02 15:48:52 crc kubenswrapper[4900]: E1202 15:48:52.074002 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1\": container with ID starting with 0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1 not found: ID does not exist" containerID="0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.074035 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1"} err="failed to get container status \"0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1\": rpc error: code = NotFound desc = could not find container \"0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1\": container with ID starting with 0df0f9df72d528c34d53846e71f242956229089938b11c5d0bf5bf91088939f1 not found: ID does not exist" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.074062 4900 scope.go:117] "RemoveContainer" containerID="ab97415b8ca9479e2222b53091d6cd0e0a190345671908cfc59b68e79a5d0827" Dec 02 15:48:52 crc kubenswrapper[4900]: E1202 15:48:52.074510 4900 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"ab97415b8ca9479e2222b53091d6cd0e0a190345671908cfc59b68e79a5d0827\": container with ID starting with ab97415b8ca9479e2222b53091d6cd0e0a190345671908cfc59b68e79a5d0827 not found: ID does not exist" containerID="ab97415b8ca9479e2222b53091d6cd0e0a190345671908cfc59b68e79a5d0827" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.074546 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab97415b8ca9479e2222b53091d6cd0e0a190345671908cfc59b68e79a5d0827"} err="failed to get container status \"ab97415b8ca9479e2222b53091d6cd0e0a190345671908cfc59b68e79a5d0827\": rpc error: code = NotFound desc = could not find container \"ab97415b8ca9479e2222b53091d6cd0e0a190345671908cfc59b68e79a5d0827\": container with ID starting with ab97415b8ca9479e2222b53091d6cd0e0a190345671908cfc59b68e79a5d0827 not found: ID does not exist" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.114595 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-twxzg"] Dec 02 15:48:52 crc kubenswrapper[4900]: W1202 15:48:52.114909 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod118b34e8_dda9_4e1c_9323_3e64eb19ac6d.slice/crio-e862a857ceb9b44ffa2febde6a91d5a68182e2322033a34ab396de24c7ff45b3 WatchSource:0}: Error finding container e862a857ceb9b44ffa2febde6a91d5a68182e2322033a34ab396de24c7ff45b3: Status 404 returned error can't find the container with id e862a857ceb9b44ffa2febde6a91d5a68182e2322033a34ab396de24c7ff45b3 Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.314032 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nq2vs"] Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.324544 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nq2vs"] Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.922662 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ead230-c0f8-4966-8403-3a691c7b741e" path="/var/lib/kubelet/pods/60ead230-c0f8-4966-8403-3a691c7b741e/volumes" Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.983133 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-twxzg" event={"ID":"118b34e8-dda9-4e1c-9323-3e64eb19ac6d","Type":"ContainerStarted","Data":"23883e614a0317f7527eeb23c0312ec6a834122f700e8416cbdea95ee60ef2b1"} Dec 02 15:48:52 crc kubenswrapper[4900]: I1202 15:48:52.983469 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-twxzg" event={"ID":"118b34e8-dda9-4e1c-9323-3e64eb19ac6d","Type":"ContainerStarted","Data":"e862a857ceb9b44ffa2febde6a91d5a68182e2322033a34ab396de24c7ff45b3"} Dec 02 15:48:53 crc kubenswrapper[4900]: I1202 15:48:53.013372 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-twxzg" podStartSLOduration=1.455272197 podStartE2EDuration="2.013348416s" podCreationTimestamp="2025-12-02 15:48:51 +0000 UTC" firstStartedPulling="2025-12-02 15:48:52.117105658 +0000 UTC m=+7577.532919519" lastFinishedPulling="2025-12-02 15:48:52.675181887 +0000 UTC m=+7578.090995738" observedRunningTime="2025-12-02 15:48:53.004859718 +0000 UTC m=+7578.420673579" watchObservedRunningTime="2025-12-02 15:48:53.013348416 +0000 UTC m=+7578.429162267" Dec 02 15:49:15 crc 
Dec 02 15:49:15 crc kubenswrapper[4900]: I1202 15:49:15.116552 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 15:49:15 crc kubenswrapper[4900]: I1202 15:49:15.117295 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 15:49:15 crc kubenswrapper[4900]: I1202 15:49:15.118052 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq"
Dec 02 15:49:15 crc kubenswrapper[4900]: I1202 15:49:15.118718 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8daa2c69c39f58108d67ad8d520f28a353f6ebb814b42d511c01c86f4488732"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 15:49:15 crc kubenswrapper[4900]: I1202 15:49:15.118788 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://c8daa2c69c39f58108d67ad8d520f28a353f6ebb814b42d511c01c86f4488732" gracePeriod=600
Dec 02 15:49:16 crc kubenswrapper[4900]: I1202 15:49:16.238831 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="c8daa2c69c39f58108d67ad8d520f28a353f6ebb814b42d511c01c86f4488732" exitCode=0
Dec 02 15:49:16 crc kubenswrapper[4900]: I1202 15:49:16.238906 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"c8daa2c69c39f58108d67ad8d520f28a353f6ebb814b42d511c01c86f4488732"}
Dec 02 15:49:16 crc kubenswrapper[4900]: I1202 15:49:16.239391 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a"}
Dec 02 15:49:16 crc kubenswrapper[4900]: I1202 15:49:16.239423 4900 scope.go:117] "RemoveContainer" containerID="bc1707e5e9950064a12a755bcc445bc5671ca692cca30eca7ac06a0a7d74837c"
Dec 02 15:49:59 crc kubenswrapper[4900]: I1202 15:49:59.719103 4900 generic.go:334] "Generic (PLEG): container finished" podID="118b34e8-dda9-4e1c-9323-3e64eb19ac6d" containerID="23883e614a0317f7527eeb23c0312ec6a834122f700e8416cbdea95ee60ef2b1" exitCode=0
Dec 02 15:49:59 crc kubenswrapper[4900]: I1202 15:49:59.719189 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-twxzg" event={"ID":"118b34e8-dda9-4e1c-9323-3e64eb19ac6d","Type":"ContainerDied","Data":"23883e614a0317f7527eeb23c0312ec6a834122f700e8416cbdea95ee60ef2b1"}
Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.199012 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-twxzg"
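[Editor's note] The liveness failure above is a plain HTTP GET against http://127.0.0.1:8798/health; "connect: connection refused" is a transport-level error, which counts as a failed probe, and the kubelet's response is the kill-and-restart sequence recorded in the next few entries. A minimal sketch of such a check, assuming this shape of probe (this is not the kubelet's prober code):

```go
// One HTTP liveness check: transport errors and non-2xx/3xx statuses fail.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err) // enough failures => container restart
	}
}
```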
Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.308545 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ssh-key\") pod \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") "
Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.308934 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovn-combined-ca-bundle\") pod \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") "
Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.308962 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovncontroller-config-0\") pod \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") "
Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.309012 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-inventory\") pod \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") "
Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.309188 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tp75\" (UniqueName: \"kubernetes.io/projected/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-kube-api-access-4tp75\") pod \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") "
Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.309212 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ceph\") pod \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\" (UID: \"118b34e8-dda9-4e1c-9323-3e64eb19ac6d\") "
Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.316130 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ceph" (OuterVolumeSpecName: "ceph") pod "118b34e8-dda9-4e1c-9323-3e64eb19ac6d" (UID: "118b34e8-dda9-4e1c-9323-3e64eb19ac6d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.316167 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "118b34e8-dda9-4e1c-9323-3e64eb19ac6d" (UID: "118b34e8-dda9-4e1c-9323-3e64eb19ac6d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.325044 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-kube-api-access-4tp75" (OuterVolumeSpecName: "kube-api-access-4tp75") pod "118b34e8-dda9-4e1c-9323-3e64eb19ac6d" (UID: "118b34e8-dda9-4e1c-9323-3e64eb19ac6d"). InnerVolumeSpecName "kube-api-access-4tp75".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.339406 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "118b34e8-dda9-4e1c-9323-3e64eb19ac6d" (UID: "118b34e8-dda9-4e1c-9323-3e64eb19ac6d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.344872 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-inventory" (OuterVolumeSpecName: "inventory") pod "118b34e8-dda9-4e1c-9323-3e64eb19ac6d" (UID: "118b34e8-dda9-4e1c-9323-3e64eb19ac6d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.352395 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "118b34e8-dda9-4e1c-9323-3e64eb19ac6d" (UID: "118b34e8-dda9-4e1c-9323-3e64eb19ac6d"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.416031 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.416072 4900 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.416086 4900 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.416097 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.416106 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tp75\" (UniqueName: \"kubernetes.io/projected/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-kube-api-access-4tp75\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.416114 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/118b34e8-dda9-4e1c-9323-3e64eb19ac6d-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.753935 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-twxzg" event={"ID":"118b34e8-dda9-4e1c-9323-3e64eb19ac6d","Type":"ContainerDied","Data":"e862a857ceb9b44ffa2febde6a91d5a68182e2322033a34ab396de24c7ff45b3"} Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.753984 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e862a857ceb9b44ffa2febde6a91d5a68182e2322033a34ab396de24c7ff45b3" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 
15:50:01.754047 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-twxzg" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.987106 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-7djgq"] Dec 02 15:50:01 crc kubenswrapper[4900]: E1202 15:50:01.987807 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ead230-c0f8-4966-8403-3a691c7b741e" containerName="extract-content" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.987823 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ead230-c0f8-4966-8403-3a691c7b741e" containerName="extract-content" Dec 02 15:50:01 crc kubenswrapper[4900]: E1202 15:50:01.987843 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118b34e8-dda9-4e1c-9323-3e64eb19ac6d" containerName="ovn-openstack-openstack-cell1" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.987849 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="118b34e8-dda9-4e1c-9323-3e64eb19ac6d" containerName="ovn-openstack-openstack-cell1" Dec 02 15:50:01 crc kubenswrapper[4900]: E1202 15:50:01.987860 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ead230-c0f8-4966-8403-3a691c7b741e" containerName="registry-server" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.987866 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ead230-c0f8-4966-8403-3a691c7b741e" containerName="registry-server" Dec 02 15:50:01 crc kubenswrapper[4900]: E1202 15:50:01.987896 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ead230-c0f8-4966-8403-3a691c7b741e" containerName="extract-utilities" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.987902 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ead230-c0f8-4966-8403-3a691c7b741e" containerName="extract-utilities" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.988092 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ead230-c0f8-4966-8403-3a691c7b741e" containerName="registry-server" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.988113 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="118b34e8-dda9-4e1c-9323-3e64eb19ac6d" containerName="ovn-openstack-openstack-cell1" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.988892 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.990530 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.991414 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.991500 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.991522 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.991534 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:50:01 crc kubenswrapper[4900]: I1202 15:50:01.991809 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.004578 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-7djgq"] Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.145983 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.146043 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.146081 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.146128 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.146183 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbp6m\" (UniqueName: \"kubernetes.io/projected/3b0426b9-6b77-438a-a6a6-b71951425f1d-kube-api-access-xbp6m\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " 
pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.146438 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.146491 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.249056 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbp6m\" (UniqueName: \"kubernetes.io/projected/3b0426b9-6b77-438a-a6a6-b71951425f1d-kube-api-access-xbp6m\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.249182 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.249214 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.249279 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.249327 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.249368 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: 
\"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.249416 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.254310 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.254544 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.255511 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.259377 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.261129 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.265441 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.278974 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbp6m\" (UniqueName: \"kubernetes.io/projected/3b0426b9-6b77-438a-a6a6-b71951425f1d-kube-api-access-xbp6m\") pod \"neutron-metadata-openstack-openstack-cell1-7djgq\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") " 
pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.367729 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:02 crc kubenswrapper[4900]: I1202 15:50:02.946845 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-7djgq"] Dec 02 15:50:03 crc kubenswrapper[4900]: I1202 15:50:03.787153 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" event={"ID":"3b0426b9-6b77-438a-a6a6-b71951425f1d","Type":"ContainerStarted","Data":"76829a2c2398961bafcb5ac06f2efb29f8e9e61aca5f4dac222805fb0bcb689f"} Dec 02 15:50:04 crc kubenswrapper[4900]: I1202 15:50:04.800492 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" event={"ID":"3b0426b9-6b77-438a-a6a6-b71951425f1d","Type":"ContainerStarted","Data":"470cc504854678a160d474136fbfbd0e6c0fe2d3fa5dda8b382e9c24ee91a5e8"} Dec 02 15:50:04 crc kubenswrapper[4900]: I1202 15:50:04.819343 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" podStartSLOduration=2.761989235 podStartE2EDuration="3.819324048s" podCreationTimestamp="2025-12-02 15:50:01 +0000 UTC" firstStartedPulling="2025-12-02 15:50:02.961983806 +0000 UTC m=+7648.377797657" lastFinishedPulling="2025-12-02 15:50:04.019318609 +0000 UTC m=+7649.435132470" observedRunningTime="2025-12-02 15:50:04.815555572 +0000 UTC m=+7650.231369423" watchObservedRunningTime="2025-12-02 15:50:04.819324048 +0000 UTC m=+7650.235137899" Dec 02 15:50:57 crc kubenswrapper[4900]: I1202 15:50:57.401405 4900 generic.go:334] "Generic (PLEG): container finished" podID="3b0426b9-6b77-438a-a6a6-b71951425f1d" containerID="470cc504854678a160d474136fbfbd0e6c0fe2d3fa5dda8b382e9c24ee91a5e8" exitCode=0 Dec 02 15:50:57 crc kubenswrapper[4900]: I1202 15:50:57.402297 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" event={"ID":"3b0426b9-6b77-438a-a6a6-b71951425f1d","Type":"ContainerDied","Data":"470cc504854678a160d474136fbfbd0e6c0fe2d3fa5dda8b382e9c24ee91a5e8"} Dec 02 15:50:58 crc kubenswrapper[4900]: I1202 15:50:58.960134 4900 util.go:48] "No ready sandbox for pod can be found. 
Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.102066 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-metadata-combined-ca-bundle\") pod \"3b0426b9-6b77-438a-a6a6-b71951425f1d\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") "
Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.102357 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ceph\") pod \"3b0426b9-6b77-438a-a6a6-b71951425f1d\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") "
Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.102524 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-nova-metadata-neutron-config-0\") pod \"3b0426b9-6b77-438a-a6a6-b71951425f1d\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") "
Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.102760 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-inventory\") pod \"3b0426b9-6b77-438a-a6a6-b71951425f1d\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") "
Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.102999 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbp6m\" (UniqueName: \"kubernetes.io/projected/3b0426b9-6b77-438a-a6a6-b71951425f1d-kube-api-access-xbp6m\") pod \"3b0426b9-6b77-438a-a6a6-b71951425f1d\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") "
Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.103141 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ssh-key\") pod \"3b0426b9-6b77-438a-a6a6-b71951425f1d\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") "
Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.103276 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"3b0426b9-6b77-438a-a6a6-b71951425f1d\" (UID: \"3b0426b9-6b77-438a-a6a6-b71951425f1d\") "
Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.107607 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ceph" (OuterVolumeSpecName: "ceph") pod "3b0426b9-6b77-438a-a6a6-b71951425f1d" (UID: "3b0426b9-6b77-438a-a6a6-b71951425f1d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.108077 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b0426b9-6b77-438a-a6a6-b71951425f1d-kube-api-access-xbp6m" (OuterVolumeSpecName: "kube-api-access-xbp6m") pod "3b0426b9-6b77-438a-a6a6-b71951425f1d" (UID: "3b0426b9-6b77-438a-a6a6-b71951425f1d"). InnerVolumeSpecName "kube-api-access-xbp6m".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.108807 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3b0426b9-6b77-438a-a6a6-b71951425f1d" (UID: "3b0426b9-6b77-438a-a6a6-b71951425f1d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.132278 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "3b0426b9-6b77-438a-a6a6-b71951425f1d" (UID: "3b0426b9-6b77-438a-a6a6-b71951425f1d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.133162 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-inventory" (OuterVolumeSpecName: "inventory") pod "3b0426b9-6b77-438a-a6a6-b71951425f1d" (UID: "3b0426b9-6b77-438a-a6a6-b71951425f1d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.142006 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3b0426b9-6b77-438a-a6a6-b71951425f1d" (UID: "3b0426b9-6b77-438a-a6a6-b71951425f1d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.145236 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "3b0426b9-6b77-438a-a6a6-b71951425f1d" (UID: "3b0426b9-6b77-438a-a6a6-b71951425f1d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.206097 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.206137 4900 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.206150 4900 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.206159 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.206169 4900 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.206178 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b0426b9-6b77-438a-a6a6-b71951425f1d-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.206187 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbp6m\" (UniqueName: \"kubernetes.io/projected/3b0426b9-6b77-438a-a6a6-b71951425f1d-kube-api-access-xbp6m\") on node \"crc\" DevicePath \"\"" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.427965 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" event={"ID":"3b0426b9-6b77-438a-a6a6-b71951425f1d","Type":"ContainerDied","Data":"76829a2c2398961bafcb5ac06f2efb29f8e9e61aca5f4dac222805fb0bcb689f"} Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.428017 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-7djgq" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.428032 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76829a2c2398961bafcb5ac06f2efb29f8e9e61aca5f4dac222805fb0bcb689f" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.609631 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-gljkp"] Dec 02 15:50:59 crc kubenswrapper[4900]: E1202 15:50:59.610185 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b0426b9-6b77-438a-a6a6-b71951425f1d" containerName="neutron-metadata-openstack-openstack-cell1" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.610209 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b0426b9-6b77-438a-a6a6-b71951425f1d" containerName="neutron-metadata-openstack-openstack-cell1" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.610472 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b0426b9-6b77-438a-a6a6-b71951425f1d" containerName="neutron-metadata-openstack-openstack-cell1" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.611356 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.613757 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.614101 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.614842 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.614884 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.615104 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.615491 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-inventory\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.615544 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bx22\" (UniqueName: \"kubernetes.io/projected/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-kube-api-access-4bx22\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.615655 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ssh-key\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.615682 4900 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ceph\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.615749 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.615834 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.630827 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-gljkp"] Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.717531 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.717632 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-inventory\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.717688 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bx22\" (UniqueName: \"kubernetes.io/projected/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-kube-api-access-4bx22\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.717797 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ssh-key\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.717819 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ceph\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.717900 4900 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.721517 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-inventory\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.722213 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ceph\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.729551 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.734202 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.736788 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bx22\" (UniqueName: \"kubernetes.io/projected/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-kube-api-access-4bx22\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.737704 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ssh-key\") pod \"libvirt-openstack-openstack-cell1-gljkp\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:50:59 crc kubenswrapper[4900]: I1202 15:50:59.935243 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:51:00 crc kubenswrapper[4900]: I1202 15:51:00.565297 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-gljkp"] Dec 02 15:51:01 crc kubenswrapper[4900]: I1202 15:51:01.459773 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-gljkp" event={"ID":"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5","Type":"ContainerStarted","Data":"96820289bd96931d82232c66681eb255483934e0cba906c06e02e3887055938f"} Dec 02 15:51:01 crc kubenswrapper[4900]: I1202 15:51:01.460389 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-gljkp" event={"ID":"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5","Type":"ContainerStarted","Data":"0cea54b0da2886082a52cb9526e33835bd55a7eca85e59209bcbb9ad1c62fec6"} Dec 02 15:51:01 crc kubenswrapper[4900]: I1202 15:51:01.478787 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-gljkp" podStartSLOduration=2.020222476 podStartE2EDuration="2.478758444s" podCreationTimestamp="2025-12-02 15:50:59 +0000 UTC" firstStartedPulling="2025-12-02 15:51:00.578247865 +0000 UTC m=+7705.994061716" lastFinishedPulling="2025-12-02 15:51:01.036783833 +0000 UTC m=+7706.452597684" observedRunningTime="2025-12-02 15:51:01.476814419 +0000 UTC m=+7706.892628270" watchObservedRunningTime="2025-12-02 15:51:01.478758444 +0000 UTC m=+7706.894572295" Dec 02 15:51:15 crc kubenswrapper[4900]: I1202 15:51:15.116899 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:51:15 crc kubenswrapper[4900]: I1202 15:51:15.117384 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:51:45 crc kubenswrapper[4900]: I1202 15:51:45.116876 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:51:45 crc kubenswrapper[4900]: I1202 15:51:45.117546 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:52:15 crc kubenswrapper[4900]: I1202 15:52:15.116188 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:52:15 crc kubenswrapper[4900]: I1202 15:52:15.116922 4900 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:52:15 crc kubenswrapper[4900]: I1202 15:52:15.117024 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 15:52:15 crc kubenswrapper[4900]: I1202 15:52:15.117846 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:52:15 crc kubenswrapper[4900]: I1202 15:52:15.117892 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" gracePeriod=600 Dec 02 15:52:15 crc kubenswrapper[4900]: E1202 15:52:15.245360 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:52:15 crc kubenswrapper[4900]: I1202 15:52:15.334999 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" exitCode=0 Dec 02 15:52:15 crc kubenswrapper[4900]: I1202 15:52:15.335076 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a"} Dec 02 15:52:15 crc kubenswrapper[4900]: I1202 15:52:15.335345 4900 scope.go:117] "RemoveContainer" containerID="c8daa2c69c39f58108d67ad8d520f28a353f6ebb814b42d511c01c86f4488732" Dec 02 15:52:15 crc kubenswrapper[4900]: I1202 15:52:15.336110 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:52:15 crc kubenswrapper[4900]: E1202 15:52:15.336514 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:52:27 crc kubenswrapper[4900]: I1202 15:52:27.910098 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:52:27 crc kubenswrapper[4900]: E1202 15:52:27.911092 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:52:38 crc kubenswrapper[4900]: I1202 15:52:38.910347 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:52:38 crc kubenswrapper[4900]: E1202 15:52:38.911339 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:52:52 crc kubenswrapper[4900]: I1202 15:52:52.910862 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:52:52 crc kubenswrapper[4900]: E1202 15:52:52.911793 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:53:06 crc kubenswrapper[4900]: I1202 15:53:06.910905 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:53:06 crc kubenswrapper[4900]: E1202 15:53:06.912471 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:53:17 crc kubenswrapper[4900]: I1202 15:53:17.909855 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:53:17 crc kubenswrapper[4900]: E1202 15:53:17.910580 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:53:30 crc kubenswrapper[4900]: I1202 15:53:30.910508 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:53:30 crc kubenswrapper[4900]: E1202 15:53:30.911379 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:53:45 crc kubenswrapper[4900]: I1202 15:53:45.911099 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:53:45 crc kubenswrapper[4900]: E1202 15:53:45.912493 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:53:58 crc kubenswrapper[4900]: I1202 15:53:58.911068 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:53:58 crc kubenswrapper[4900]: E1202 15:53:58.912149 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:54:09 crc kubenswrapper[4900]: I1202 15:54:09.910991 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:54:09 crc kubenswrapper[4900]: E1202 15:54:09.912532 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:54:23 crc kubenswrapper[4900]: I1202 15:54:23.911089 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:54:23 crc kubenswrapper[4900]: E1202 15:54:23.913385 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:54:34 crc kubenswrapper[4900]: I1202 15:54:34.921467 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:54:34 crc kubenswrapper[4900]: E1202 15:54:34.923968 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" 
podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.735505 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gbcrt"] Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.738561 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.752280 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbcrt"] Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.817526 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwh9\" (UniqueName: \"kubernetes.io/projected/22916595-1e47-4c62-aa78-7ac029e44c05-kube-api-access-ljwh9\") pod \"redhat-marketplace-gbcrt\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.817666 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-utilities\") pod \"redhat-marketplace-gbcrt\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.817692 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-catalog-content\") pod \"redhat-marketplace-gbcrt\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.919237 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljwh9\" (UniqueName: \"kubernetes.io/projected/22916595-1e47-4c62-aa78-7ac029e44c05-kube-api-access-ljwh9\") pod \"redhat-marketplace-gbcrt\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.919358 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-utilities\") pod \"redhat-marketplace-gbcrt\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.919385 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-catalog-content\") pod \"redhat-marketplace-gbcrt\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.919859 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-utilities\") pod \"redhat-marketplace-gbcrt\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.919947 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-catalog-content\") pod \"redhat-marketplace-gbcrt\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:47 crc kubenswrapper[4900]: I1202 15:54:47.939800 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljwh9\" (UniqueName: \"kubernetes.io/projected/22916595-1e47-4c62-aa78-7ac029e44c05-kube-api-access-ljwh9\") pod \"redhat-marketplace-gbcrt\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:48 crc kubenswrapper[4900]: I1202 15:54:48.059817 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:48 crc kubenswrapper[4900]: W1202 15:54:48.607988 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22916595_1e47_4c62_aa78_7ac029e44c05.slice/crio-a6c56fc4bbfeabeff44e8c33290d4e7b4785b8d99292d9373fdf68e4a9b752eb WatchSource:0}: Error finding container a6c56fc4bbfeabeff44e8c33290d4e7b4785b8d99292d9373fdf68e4a9b752eb: Status 404 returned error can't find the container with id a6c56fc4bbfeabeff44e8c33290d4e7b4785b8d99292d9373fdf68e4a9b752eb Dec 02 15:54:48 crc kubenswrapper[4900]: I1202 15:54:48.609601 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbcrt"] Dec 02 15:54:48 crc kubenswrapper[4900]: I1202 15:54:48.910586 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:54:48 crc kubenswrapper[4900]: E1202 15:54:48.911991 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:54:49 crc kubenswrapper[4900]: I1202 15:54:49.033659 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbcrt" event={"ID":"22916595-1e47-4c62-aa78-7ac029e44c05","Type":"ContainerStarted","Data":"a6c56fc4bbfeabeff44e8c33290d4e7b4785b8d99292d9373fdf68e4a9b752eb"} Dec 02 15:54:50 crc kubenswrapper[4900]: I1202 15:54:50.048345 4900 generic.go:334] "Generic (PLEG): container finished" podID="22916595-1e47-4c62-aa78-7ac029e44c05" containerID="6db546aefc1f678326fad4b5c792f590aa4ae45534dad5b8eae12360f5ed7c50" exitCode=0 Dec 02 15:54:50 crc kubenswrapper[4900]: I1202 15:54:50.048416 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbcrt" event={"ID":"22916595-1e47-4c62-aa78-7ac029e44c05","Type":"ContainerDied","Data":"6db546aefc1f678326fad4b5c792f590aa4ae45534dad5b8eae12360f5ed7c50"} Dec 02 15:54:50 crc kubenswrapper[4900]: I1202 15:54:50.052803 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:54:53 crc kubenswrapper[4900]: I1202 15:54:53.086189 4900 generic.go:334] "Generic (PLEG): container finished" podID="22916595-1e47-4c62-aa78-7ac029e44c05" containerID="837c468f915977e0f6f6948b9c4848ae530627fdabdac80664a7f899ee1e90b7" exitCode=0 Dec 02 15:54:53 crc kubenswrapper[4900]: 
I1202 15:54:53.086310 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbcrt" event={"ID":"22916595-1e47-4c62-aa78-7ac029e44c05","Type":"ContainerDied","Data":"837c468f915977e0f6f6948b9c4848ae530627fdabdac80664a7f899ee1e90b7"} Dec 02 15:54:56 crc kubenswrapper[4900]: I1202 15:54:56.125108 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbcrt" event={"ID":"22916595-1e47-4c62-aa78-7ac029e44c05","Type":"ContainerStarted","Data":"61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61"} Dec 02 15:54:56 crc kubenswrapper[4900]: I1202 15:54:56.160614 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gbcrt" podStartSLOduration=4.538015909 podStartE2EDuration="9.16059165s" podCreationTimestamp="2025-12-02 15:54:47 +0000 UTC" firstStartedPulling="2025-12-02 15:54:50.052393801 +0000 UTC m=+7935.468207662" lastFinishedPulling="2025-12-02 15:54:54.674969552 +0000 UTC m=+7940.090783403" observedRunningTime="2025-12-02 15:54:56.146264497 +0000 UTC m=+7941.562078388" watchObservedRunningTime="2025-12-02 15:54:56.16059165 +0000 UTC m=+7941.576405511" Dec 02 15:54:58 crc kubenswrapper[4900]: I1202 15:54:58.060147 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:58 crc kubenswrapper[4900]: I1202 15:54:58.060482 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:54:58 crc kubenswrapper[4900]: I1202 15:54:58.114635 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:55:02 crc kubenswrapper[4900]: I1202 15:55:02.911013 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:55:02 crc kubenswrapper[4900]: E1202 15:55:02.912519 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:55:08 crc kubenswrapper[4900]: I1202 15:55:08.120323 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:55:08 crc kubenswrapper[4900]: I1202 15:55:08.180060 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbcrt"] Dec 02 15:55:08 crc kubenswrapper[4900]: I1202 15:55:08.252582 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gbcrt" podUID="22916595-1e47-4c62-aa78-7ac029e44c05" containerName="registry-server" containerID="cri-o://61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61" gracePeriod=2 Dec 02 15:55:08 crc kubenswrapper[4900]: I1202 15:55:08.777159 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:55:08 crc kubenswrapper[4900]: I1202 15:55:08.900237 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-utilities\") pod \"22916595-1e47-4c62-aa78-7ac029e44c05\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " Dec 02 15:55:08 crc kubenswrapper[4900]: I1202 15:55:08.900466 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-catalog-content\") pod \"22916595-1e47-4c62-aa78-7ac029e44c05\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " Dec 02 15:55:08 crc kubenswrapper[4900]: I1202 15:55:08.900607 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljwh9\" (UniqueName: \"kubernetes.io/projected/22916595-1e47-4c62-aa78-7ac029e44c05-kube-api-access-ljwh9\") pod \"22916595-1e47-4c62-aa78-7ac029e44c05\" (UID: \"22916595-1e47-4c62-aa78-7ac029e44c05\") " Dec 02 15:55:08 crc kubenswrapper[4900]: I1202 15:55:08.901237 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-utilities" (OuterVolumeSpecName: "utilities") pod "22916595-1e47-4c62-aa78-7ac029e44c05" (UID: "22916595-1e47-4c62-aa78-7ac029e44c05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:55:08 crc kubenswrapper[4900]: I1202 15:55:08.918014 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22916595-1e47-4c62-aa78-7ac029e44c05" (UID: "22916595-1e47-4c62-aa78-7ac029e44c05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:55:08 crc kubenswrapper[4900]: I1202 15:55:08.918447 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22916595-1e47-4c62-aa78-7ac029e44c05-kube-api-access-ljwh9" (OuterVolumeSpecName: "kube-api-access-ljwh9") pod "22916595-1e47-4c62-aa78-7ac029e44c05" (UID: "22916595-1e47-4c62-aa78-7ac029e44c05"). InnerVolumeSpecName "kube-api-access-ljwh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.003680 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljwh9\" (UniqueName: \"kubernetes.io/projected/22916595-1e47-4c62-aa78-7ac029e44c05-kube-api-access-ljwh9\") on node \"crc\" DevicePath \"\"" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.003718 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.003754 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22916595-1e47-4c62-aa78-7ac029e44c05-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.263988 4900 generic.go:334] "Generic (PLEG): container finished" podID="22916595-1e47-4c62-aa78-7ac029e44c05" containerID="61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61" exitCode=0 Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.264030 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbcrt" event={"ID":"22916595-1e47-4c62-aa78-7ac029e44c05","Type":"ContainerDied","Data":"61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61"} Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.264061 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbcrt" event={"ID":"22916595-1e47-4c62-aa78-7ac029e44c05","Type":"ContainerDied","Data":"a6c56fc4bbfeabeff44e8c33290d4e7b4785b8d99292d9373fdf68e4a9b752eb"} Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.264077 4900 scope.go:117] "RemoveContainer" containerID="61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.264192 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbcrt" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.292415 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbcrt"] Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.294470 4900 scope.go:117] "RemoveContainer" containerID="837c468f915977e0f6f6948b9c4848ae530627fdabdac80664a7f899ee1e90b7" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.315489 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbcrt"] Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.316151 4900 scope.go:117] "RemoveContainer" containerID="6db546aefc1f678326fad4b5c792f590aa4ae45534dad5b8eae12360f5ed7c50" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.367834 4900 scope.go:117] "RemoveContainer" containerID="61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61" Dec 02 15:55:09 crc kubenswrapper[4900]: E1202 15:55:09.368338 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61\": container with ID starting with 61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61 not found: ID does not exist" containerID="61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.368384 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61"} err="failed to get container status \"61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61\": rpc error: code = NotFound desc = could not find container \"61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61\": container with ID starting with 61d2dc44fb443dfb1a0d7f813bd7ff45dee485346ebe63aa75e1c7842e5a4d61 not found: ID does not exist" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.368412 4900 scope.go:117] "RemoveContainer" containerID="837c468f915977e0f6f6948b9c4848ae530627fdabdac80664a7f899ee1e90b7" Dec 02 15:55:09 crc kubenswrapper[4900]: E1202 15:55:09.368751 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"837c468f915977e0f6f6948b9c4848ae530627fdabdac80664a7f899ee1e90b7\": container with ID starting with 837c468f915977e0f6f6948b9c4848ae530627fdabdac80664a7f899ee1e90b7 not found: ID does not exist" containerID="837c468f915977e0f6f6948b9c4848ae530627fdabdac80664a7f899ee1e90b7" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.368777 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837c468f915977e0f6f6948b9c4848ae530627fdabdac80664a7f899ee1e90b7"} err="failed to get container status \"837c468f915977e0f6f6948b9c4848ae530627fdabdac80664a7f899ee1e90b7\": rpc error: code = NotFound desc = could not find container \"837c468f915977e0f6f6948b9c4848ae530627fdabdac80664a7f899ee1e90b7\": container with ID starting with 837c468f915977e0f6f6948b9c4848ae530627fdabdac80664a7f899ee1e90b7 not found: ID does not exist" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.368793 4900 scope.go:117] "RemoveContainer" containerID="6db546aefc1f678326fad4b5c792f590aa4ae45534dad5b8eae12360f5ed7c50" Dec 02 15:55:09 crc kubenswrapper[4900]: E1202 15:55:09.369145 4900 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6db546aefc1f678326fad4b5c792f590aa4ae45534dad5b8eae12360f5ed7c50\": container with ID starting with 6db546aefc1f678326fad4b5c792f590aa4ae45534dad5b8eae12360f5ed7c50 not found: ID does not exist" containerID="6db546aefc1f678326fad4b5c792f590aa4ae45534dad5b8eae12360f5ed7c50" Dec 02 15:55:09 crc kubenswrapper[4900]: I1202 15:55:09.369196 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6db546aefc1f678326fad4b5c792f590aa4ae45534dad5b8eae12360f5ed7c50"} err="failed to get container status \"6db546aefc1f678326fad4b5c792f590aa4ae45534dad5b8eae12360f5ed7c50\": rpc error: code = NotFound desc = could not find container \"6db546aefc1f678326fad4b5c792f590aa4ae45534dad5b8eae12360f5ed7c50\": container with ID starting with 6db546aefc1f678326fad4b5c792f590aa4ae45534dad5b8eae12360f5ed7c50 not found: ID does not exist" Dec 02 15:55:10 crc kubenswrapper[4900]: I1202 15:55:10.922327 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22916595-1e47-4c62-aa78-7ac029e44c05" path="/var/lib/kubelet/pods/22916595-1e47-4c62-aa78-7ac029e44c05/volumes" Dec 02 15:55:16 crc kubenswrapper[4900]: I1202 15:55:16.911085 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:55:16 crc kubenswrapper[4900]: E1202 15:55:16.911907 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:55:29 crc kubenswrapper[4900]: I1202 15:55:29.910238 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:55:29 crc kubenswrapper[4900]: E1202 15:55:29.911087 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:55:44 crc kubenswrapper[4900]: I1202 15:55:44.661962 4900 generic.go:334] "Generic (PLEG): container finished" podID="1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5" containerID="96820289bd96931d82232c66681eb255483934e0cba906c06e02e3887055938f" exitCode=0 Dec 02 15:55:44 crc kubenswrapper[4900]: I1202 15:55:44.662101 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-gljkp" event={"ID":"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5","Type":"ContainerDied","Data":"96820289bd96931d82232c66681eb255483934e0cba906c06e02e3887055938f"} Dec 02 15:55:44 crc kubenswrapper[4900]: I1202 15:55:44.919994 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:55:44 crc kubenswrapper[4900]: E1202 15:55:44.920380 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.231146 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.360489 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ceph\") pod \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.360587 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-secret-0\") pod \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.360614 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bx22\" (UniqueName: \"kubernetes.io/projected/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-kube-api-access-4bx22\") pod \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.360680 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-inventory\") pod \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.360750 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-combined-ca-bundle\") pod \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.360810 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ssh-key\") pod \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\" (UID: \"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5\") " Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.367684 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5" (UID: "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.370595 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ceph" (OuterVolumeSpecName: "ceph") pod "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5" (UID: "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.370735 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-kube-api-access-4bx22" (OuterVolumeSpecName: "kube-api-access-4bx22") pod "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5" (UID: "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5"). InnerVolumeSpecName "kube-api-access-4bx22". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.391006 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-inventory" (OuterVolumeSpecName: "inventory") pod "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5" (UID: "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.401664 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5" (UID: "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.430604 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5" (UID: "1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.464417 4900 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.464459 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bx22\" (UniqueName: \"kubernetes.io/projected/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-kube-api-access-4bx22\") on node \"crc\" DevicePath \"\"" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.464475 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.464488 4900 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.464501 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.464512 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.687871 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-openstack-openstack-cell1-gljkp" event={"ID":"1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5","Type":"ContainerDied","Data":"0cea54b0da2886082a52cb9526e33835bd55a7eca85e59209bcbb9ad1c62fec6"} Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.687912 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cea54b0da2886082a52cb9526e33835bd55a7eca85e59209bcbb9ad1c62fec6" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.688677 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-gljkp" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.790391 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-prqhc"] Dec 02 15:55:46 crc kubenswrapper[4900]: E1202 15:55:46.790825 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22916595-1e47-4c62-aa78-7ac029e44c05" containerName="registry-server" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.790845 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="22916595-1e47-4c62-aa78-7ac029e44c05" containerName="registry-server" Dec 02 15:55:46 crc kubenswrapper[4900]: E1202 15:55:46.790877 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22916595-1e47-4c62-aa78-7ac029e44c05" containerName="extract-content" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.790884 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="22916595-1e47-4c62-aa78-7ac029e44c05" containerName="extract-content" Dec 02 15:55:46 crc kubenswrapper[4900]: E1202 15:55:46.790900 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5" containerName="libvirt-openstack-openstack-cell1" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.790907 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5" containerName="libvirt-openstack-openstack-cell1" Dec 02 15:55:46 crc kubenswrapper[4900]: E1202 15:55:46.790928 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22916595-1e47-4c62-aa78-7ac029e44c05" containerName="extract-utilities" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.790933 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="22916595-1e47-4c62-aa78-7ac029e44c05" containerName="extract-utilities" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.791152 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5" containerName="libvirt-openstack-openstack-cell1" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.791165 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="22916595-1e47-4c62-aa78-7ac029e44c05" containerName="registry-server" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.791891 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.794020 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.797402 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.797612 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.798149 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.798396 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.800827 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.807776 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.851621 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-prqhc"] Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.973472 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.973540 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ceph\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.973582 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-inventory\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.973654 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.973696 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-1\") pod 
\"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.973735 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.973753 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.973788 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.973832 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.973867 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmzq4\" (UniqueName: \"kubernetes.io/projected/c26b8eb5-c400-47f5-ae09-765101884ea4-kube-api-access-nmzq4\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:46 crc kubenswrapper[4900]: I1202 15:55:46.973894 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.075720 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-inventory\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.075826 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-0\") pod 
\"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.075904 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.076028 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.076067 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.076136 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.076225 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.076298 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmzq4\" (UniqueName: \"kubernetes.io/projected/c26b8eb5-c400-47f5-ae09-765101884ea4-kube-api-access-nmzq4\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.076341 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.076395 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: 
\"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.076424 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ceph\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.078021 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.078049 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.088732 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ceph\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.088983 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.089174 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-inventory\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.089225 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.089770 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.089826 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.089932 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.090131 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.105976 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmzq4\" (UniqueName: \"kubernetes.io/projected/c26b8eb5-c400-47f5-ae09-765101884ea4-kube-api-access-nmzq4\") pod \"nova-cell1-openstack-openstack-cell1-prqhc\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.118333 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.664828 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-prqhc"] Dec 02 15:55:47 crc kubenswrapper[4900]: I1202 15:55:47.698743 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" event={"ID":"c26b8eb5-c400-47f5-ae09-765101884ea4","Type":"ContainerStarted","Data":"0f4dfec28c9980318511b663db7dda3a4234ded36233796a86affaf1cbfdda7e"} Dec 02 15:55:48 crc kubenswrapper[4900]: I1202 15:55:48.714129 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" event={"ID":"c26b8eb5-c400-47f5-ae09-765101884ea4","Type":"ContainerStarted","Data":"4129eee7dfb7a6d005bd295e313ebac47eb34e2d2914709713e038825d13818f"} Dec 02 15:55:48 crc kubenswrapper[4900]: I1202 15:55:48.738950 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" podStartSLOduration=2.035329219 podStartE2EDuration="2.738926955s" podCreationTimestamp="2025-12-02 15:55:46 +0000 UTC" firstStartedPulling="2025-12-02 15:55:47.663838902 +0000 UTC m=+7993.079652753" lastFinishedPulling="2025-12-02 15:55:48.367436598 +0000 UTC m=+7993.783250489" observedRunningTime="2025-12-02 15:55:48.734773128 +0000 UTC m=+7994.150586989" watchObservedRunningTime="2025-12-02 15:55:48.738926955 +0000 UTC m=+7994.154740826" Dec 02 15:55:58 crc kubenswrapper[4900]: I1202 15:55:58.911077 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:55:58 crc kubenswrapper[4900]: E1202 15:55:58.912373 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:56:11 crc kubenswrapper[4900]: I1202 15:56:11.911016 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:56:11 crc kubenswrapper[4900]: E1202 15:56:11.911895 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:56:23 crc kubenswrapper[4900]: I1202 15:56:23.911036 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:56:23 crc kubenswrapper[4900]: E1202 15:56:23.911869 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:56:38 crc kubenswrapper[4900]: I1202 15:56:38.910734 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:56:38 crc kubenswrapper[4900]: E1202 15:56:38.911814 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:56:53 crc kubenswrapper[4900]: I1202 15:56:53.910484 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:56:53 crc kubenswrapper[4900]: E1202 15:56:53.911219 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:57:08 crc kubenswrapper[4900]: I1202 15:57:08.910554 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:57:08 crc kubenswrapper[4900]: E1202 15:57:08.911324 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 15:57:22 crc kubenswrapper[4900]: I1202 15:57:22.910843 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 15:57:23 crc kubenswrapper[4900]: I1202 15:57:23.802351 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"e7e9b4667adc2d9a1e9f8bc11fceab02ccb34f30d29faab5adf246cc2018145e"} Dec 02 15:57:32 crc kubenswrapper[4900]: I1202 15:57:32.936172 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m2j85"] Dec 02 15:57:32 crc kubenswrapper[4900]: I1202 15:57:32.939535 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:32 crc kubenswrapper[4900]: I1202 15:57:32.951125 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m2j85"] Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.070488 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-utilities\") pod \"community-operators-m2j85\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.071163 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-catalog-content\") pod \"community-operators-m2j85\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.071226 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4svk\" (UniqueName: \"kubernetes.io/projected/07a19bc1-77c5-42de-af14-a27209f0a473-kube-api-access-s4svk\") pod \"community-operators-m2j85\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.172691 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-catalog-content\") pod \"community-operators-m2j85\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.172766 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4svk\" (UniqueName: \"kubernetes.io/projected/07a19bc1-77c5-42de-af14-a27209f0a473-kube-api-access-s4svk\") pod \"community-operators-m2j85\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.172786 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-utilities\") pod \"community-operators-m2j85\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.173229 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-utilities\") pod \"community-operators-m2j85\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.173447 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-catalog-content\") pod \"community-operators-m2j85\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.196668 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4svk\" (UniqueName: \"kubernetes.io/projected/07a19bc1-77c5-42de-af14-a27209f0a473-kube-api-access-s4svk\") pod \"community-operators-m2j85\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.274805 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.876336 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m2j85"] Dec 02 15:57:33 crc kubenswrapper[4900]: I1202 15:57:33.903232 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2j85" event={"ID":"07a19bc1-77c5-42de-af14-a27209f0a473","Type":"ContainerStarted","Data":"0c1f2cb7327c9a9fdcc8f4d61a0271d9b0509f9f501fce733061d5255923e146"} Dec 02 15:57:34 crc kubenswrapper[4900]: I1202 15:57:34.915471 4900 generic.go:334] "Generic (PLEG): container finished" podID="07a19bc1-77c5-42de-af14-a27209f0a473" containerID="48faf2ea57aa690f99f1c61f4f6b3fc59521cc2d8d886ff52812d4fd0f67b252" exitCode=0 Dec 02 15:57:34 crc kubenswrapper[4900]: I1202 15:57:34.944471 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2j85" event={"ID":"07a19bc1-77c5-42de-af14-a27209f0a473","Type":"ContainerDied","Data":"48faf2ea57aa690f99f1c61f4f6b3fc59521cc2d8d886ff52812d4fd0f67b252"} Dec 02 15:57:36 crc kubenswrapper[4900]: I1202 15:57:36.937200 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2j85" event={"ID":"07a19bc1-77c5-42de-af14-a27209f0a473","Type":"ContainerStarted","Data":"96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d"} Dec 02 15:57:37 crc kubenswrapper[4900]: I1202 15:57:37.949899 4900 generic.go:334] "Generic (PLEG): container finished" podID="07a19bc1-77c5-42de-af14-a27209f0a473" containerID="96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d" exitCode=0 Dec 02 15:57:37 crc kubenswrapper[4900]: I1202 15:57:37.950101 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2j85" event={"ID":"07a19bc1-77c5-42de-af14-a27209f0a473","Type":"ContainerDied","Data":"96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d"} Dec 
02 15:57:39 crc kubenswrapper[4900]: I1202 15:57:39.969698 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2j85" event={"ID":"07a19bc1-77c5-42de-af14-a27209f0a473","Type":"ContainerStarted","Data":"f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767"} Dec 02 15:57:39 crc kubenswrapper[4900]: I1202 15:57:39.995062 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m2j85" podStartSLOduration=4.15803271 podStartE2EDuration="7.995042018s" podCreationTimestamp="2025-12-02 15:57:32 +0000 UTC" firstStartedPulling="2025-12-02 15:57:34.937893984 +0000 UTC m=+8100.353707855" lastFinishedPulling="2025-12-02 15:57:38.774903292 +0000 UTC m=+8104.190717163" observedRunningTime="2025-12-02 15:57:39.990940582 +0000 UTC m=+8105.406754473" watchObservedRunningTime="2025-12-02 15:57:39.995042018 +0000 UTC m=+8105.410855869" Dec 02 15:57:43 crc kubenswrapper[4900]: I1202 15:57:43.275893 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:43 crc kubenswrapper[4900]: I1202 15:57:43.276488 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:43 crc kubenswrapper[4900]: I1202 15:57:43.334162 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:44 crc kubenswrapper[4900]: I1202 15:57:44.094847 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:44 crc kubenswrapper[4900]: I1202 15:57:44.140996 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m2j85"] Dec 02 15:57:46 crc kubenswrapper[4900]: I1202 15:57:46.031507 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m2j85" podUID="07a19bc1-77c5-42de-af14-a27209f0a473" containerName="registry-server" containerID="cri-o://f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767" gracePeriod=2 Dec 02 15:57:46 crc kubenswrapper[4900]: I1202 15:57:46.585187 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:46 crc kubenswrapper[4900]: I1202 15:57:46.759242 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-utilities\") pod \"07a19bc1-77c5-42de-af14-a27209f0a473\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " Dec 02 15:57:46 crc kubenswrapper[4900]: I1202 15:57:46.759373 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-catalog-content\") pod \"07a19bc1-77c5-42de-af14-a27209f0a473\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " Dec 02 15:57:46 crc kubenswrapper[4900]: I1202 15:57:46.759450 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4svk\" (UniqueName: \"kubernetes.io/projected/07a19bc1-77c5-42de-af14-a27209f0a473-kube-api-access-s4svk\") pod \"07a19bc1-77c5-42de-af14-a27209f0a473\" (UID: \"07a19bc1-77c5-42de-af14-a27209f0a473\") " Dec 02 15:57:46 crc kubenswrapper[4900]: I1202 15:57:46.760269 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-utilities" (OuterVolumeSpecName: "utilities") pod "07a19bc1-77c5-42de-af14-a27209f0a473" (UID: "07a19bc1-77c5-42de-af14-a27209f0a473"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:57:46 crc kubenswrapper[4900]: I1202 15:57:46.765006 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a19bc1-77c5-42de-af14-a27209f0a473-kube-api-access-s4svk" (OuterVolumeSpecName: "kube-api-access-s4svk") pod "07a19bc1-77c5-42de-af14-a27209f0a473" (UID: "07a19bc1-77c5-42de-af14-a27209f0a473"). InnerVolumeSpecName "kube-api-access-s4svk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:57:46 crc kubenswrapper[4900]: I1202 15:57:46.805343 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07a19bc1-77c5-42de-af14-a27209f0a473" (UID: "07a19bc1-77c5-42de-af14-a27209f0a473"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:57:46 crc kubenswrapper[4900]: I1202 15:57:46.861594 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4svk\" (UniqueName: \"kubernetes.io/projected/07a19bc1-77c5-42de-af14-a27209f0a473-kube-api-access-s4svk\") on node \"crc\" DevicePath \"\"" Dec 02 15:57:46 crc kubenswrapper[4900]: I1202 15:57:46.861657 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:57:46 crc kubenswrapper[4900]: I1202 15:57:46.861670 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a19bc1-77c5-42de-af14-a27209f0a473-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.043722 4900 generic.go:334] "Generic (PLEG): container finished" podID="07a19bc1-77c5-42de-af14-a27209f0a473" containerID="f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767" exitCode=0 Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.043767 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2j85" event={"ID":"07a19bc1-77c5-42de-af14-a27209f0a473","Type":"ContainerDied","Data":"f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767"} Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.043791 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m2j85" event={"ID":"07a19bc1-77c5-42de-af14-a27209f0a473","Type":"ContainerDied","Data":"0c1f2cb7327c9a9fdcc8f4d61a0271d9b0509f9f501fce733061d5255923e146"} Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.043807 4900 scope.go:117] "RemoveContainer" containerID="f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767" Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.043929 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m2j85" Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.080497 4900 scope.go:117] "RemoveContainer" containerID="96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d" Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.085107 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m2j85"] Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.095939 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m2j85"] Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.117841 4900 scope.go:117] "RemoveContainer" containerID="48faf2ea57aa690f99f1c61f4f6b3fc59521cc2d8d886ff52812d4fd0f67b252" Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.162245 4900 scope.go:117] "RemoveContainer" containerID="f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767" Dec 02 15:57:47 crc kubenswrapper[4900]: E1202 15:57:47.163348 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767\": container with ID starting with f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767 not found: ID does not exist" containerID="f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767" Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.163392 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767"} err="failed to get container status \"f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767\": rpc error: code = NotFound desc = could not find container \"f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767\": container with ID starting with f1ac2c02faecc12aba56b420f1a63059b3930391dbd059c60a9d4df13c694767 not found: ID does not exist" Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.163417 4900 scope.go:117] "RemoveContainer" containerID="96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d" Dec 02 15:57:47 crc kubenswrapper[4900]: E1202 15:57:47.170281 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d\": container with ID starting with 96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d not found: ID does not exist" containerID="96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d" Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.171665 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d"} err="failed to get container status \"96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d\": rpc error: code = NotFound desc = could not find container \"96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d\": container with ID starting with 96cbd3e5eae60d5033f6ae4fca541f047b281b8acdb108694736d71f1a1f278d not found: ID does not exist" Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.171709 4900 scope.go:117] "RemoveContainer" containerID="48faf2ea57aa690f99f1c61f4f6b3fc59521cc2d8d886ff52812d4fd0f67b252" Dec 02 15:57:47 crc kubenswrapper[4900]: E1202 15:57:47.172276 4900 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"48faf2ea57aa690f99f1c61f4f6b3fc59521cc2d8d886ff52812d4fd0f67b252\": container with ID starting with 48faf2ea57aa690f99f1c61f4f6b3fc59521cc2d8d886ff52812d4fd0f67b252 not found: ID does not exist" containerID="48faf2ea57aa690f99f1c61f4f6b3fc59521cc2d8d886ff52812d4fd0f67b252" Dec 02 15:57:47 crc kubenswrapper[4900]: I1202 15:57:47.172303 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48faf2ea57aa690f99f1c61f4f6b3fc59521cc2d8d886ff52812d4fd0f67b252"} err="failed to get container status \"48faf2ea57aa690f99f1c61f4f6b3fc59521cc2d8d886ff52812d4fd0f67b252\": rpc error: code = NotFound desc = could not find container \"48faf2ea57aa690f99f1c61f4f6b3fc59521cc2d8d886ff52812d4fd0f67b252\": container with ID starting with 48faf2ea57aa690f99f1c61f4f6b3fc59521cc2d8d886ff52812d4fd0f67b252 not found: ID does not exist" Dec 02 15:57:48 crc kubenswrapper[4900]: I1202 15:57:48.926817 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a19bc1-77c5-42de-af14-a27209f0a473" path="/var/lib/kubelet/pods/07a19bc1-77c5-42de-af14-a27209f0a473/volumes" Dec 02 15:58:37 crc kubenswrapper[4900]: I1202 15:58:35.867889 4900 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-87drk container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 15:58:37 crc kubenswrapper[4900]: I1202 15:58:35.869054 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" podUID="c782003f-e8d3-4aa5-aba6-0db2706d4e43" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 15:58:37 crc kubenswrapper[4900]: I1202 15:58:35.882909 4900 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-87drk container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 15:58:37 crc kubenswrapper[4900]: I1202 15:58:35.882987 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-87drk" podUID="c782003f-e8d3-4aa5-aba6-0db2706d4e43" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 15:58:37 crc kubenswrapper[4900]: I1202 15:58:36.754020 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="96bd872d-1991-4668-b075-19f9673dccd4" containerName="galera" probeResult="failure" output="command timed out" Dec 02 15:58:37 crc kubenswrapper[4900]: I1202 15:58:37.219609 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5c5c9bf76c-rxvh2" podUID="bf1a22ef-b575-4d5c-b109-3ec72f7eb657" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.47:8080/readyz\": dial tcp 10.217.0.47:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.746250 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vvsp2"] Dec 02 15:58:44 crc kubenswrapper[4900]: E1202 15:58:44.748483 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a19bc1-77c5-42de-af14-a27209f0a473" containerName="extract-utilities" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.748508 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a19bc1-77c5-42de-af14-a27209f0a473" containerName="extract-utilities" Dec 02 15:58:44 crc kubenswrapper[4900]: E1202 15:58:44.748520 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a19bc1-77c5-42de-af14-a27209f0a473" containerName="registry-server" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.748528 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a19bc1-77c5-42de-af14-a27209f0a473" containerName="registry-server" Dec 02 15:58:44 crc kubenswrapper[4900]: E1202 15:58:44.748547 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a19bc1-77c5-42de-af14-a27209f0a473" containerName="extract-content" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.748554 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a19bc1-77c5-42de-af14-a27209f0a473" containerName="extract-content" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.748808 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a19bc1-77c5-42de-af14-a27209f0a473" containerName="registry-server" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.750567 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.766149 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vvsp2"] Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.814625 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-utilities\") pod \"certified-operators-vvsp2\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.814797 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jswtx\" (UniqueName: \"kubernetes.io/projected/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-kube-api-access-jswtx\") pod \"certified-operators-vvsp2\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.814916 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-catalog-content\") pod \"certified-operators-vvsp2\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.916245 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jswtx\" (UniqueName: \"kubernetes.io/projected/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-kube-api-access-jswtx\") pod \"certified-operators-vvsp2\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.916376 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-catalog-content\") pod \"certified-operators-vvsp2\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.916461 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-utilities\") pod \"certified-operators-vvsp2\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.917145 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-catalog-content\") pod \"certified-operators-vvsp2\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.917361 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-utilities\") pod \"certified-operators-vvsp2\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:44 crc kubenswrapper[4900]: I1202 15:58:44.938492 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jswtx\" (UniqueName: \"kubernetes.io/projected/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-kube-api-access-jswtx\") pod \"certified-operators-vvsp2\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:45 crc kubenswrapper[4900]: I1202 15:58:45.105462 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:45 crc kubenswrapper[4900]: I1202 15:58:45.721145 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vvsp2"] Dec 02 15:58:46 crc kubenswrapper[4900]: I1202 15:58:46.403459 4900 generic.go:334] "Generic (PLEG): container finished" podID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" containerID="ab43552c59a0dfb6625ca51d49b714e92b6cef8fe33ebd28153d45efc4df137c" exitCode=0 Dec 02 15:58:46 crc kubenswrapper[4900]: I1202 15:58:46.403544 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvsp2" event={"ID":"993bd8bd-db80-4c6a-8f4f-e57a55a2c581","Type":"ContainerDied","Data":"ab43552c59a0dfb6625ca51d49b714e92b6cef8fe33ebd28153d45efc4df137c"} Dec 02 15:58:46 crc kubenswrapper[4900]: I1202 15:58:46.403633 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvsp2" event={"ID":"993bd8bd-db80-4c6a-8f4f-e57a55a2c581","Type":"ContainerStarted","Data":"d8205653f78823c073d40e0e3a07322d2337b513ae750fe0bbafa6611898e17b"} Dec 02 15:58:48 crc kubenswrapper[4900]: I1202 15:58:48.427544 4900 generic.go:334] "Generic (PLEG): container finished" podID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" containerID="3b9ae331daba5093b1f501570323dee1daae93fa6a645a070d1802bc189b20c4" exitCode=0 Dec 02 15:58:48 crc kubenswrapper[4900]: I1202 15:58:48.427980 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvsp2" event={"ID":"993bd8bd-db80-4c6a-8f4f-e57a55a2c581","Type":"ContainerDied","Data":"3b9ae331daba5093b1f501570323dee1daae93fa6a645a070d1802bc189b20c4"} Dec 02 15:58:49 crc kubenswrapper[4900]: I1202 15:58:49.441954 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvsp2" event={"ID":"993bd8bd-db80-4c6a-8f4f-e57a55a2c581","Type":"ContainerStarted","Data":"573f232d28e7a5a85fc31c63fa12b0400a5c2db1ac3fdb50450f7b7ad103ffb1"} Dec 02 15:58:49 crc kubenswrapper[4900]: I1202 15:58:49.476788 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vvsp2" podStartSLOduration=2.829335772 podStartE2EDuration="5.476767711s" podCreationTimestamp="2025-12-02 15:58:44 +0000 UTC" firstStartedPulling="2025-12-02 15:58:46.405731938 +0000 UTC m=+8171.821545829" lastFinishedPulling="2025-12-02 15:58:49.053163917 +0000 UTC m=+8174.468977768" observedRunningTime="2025-12-02 15:58:49.464263007 +0000 UTC m=+8174.880076858" watchObservedRunningTime="2025-12-02 15:58:49.476767711 +0000 UTC m=+8174.892581572" Dec 02 15:58:55 crc kubenswrapper[4900]: I1202 15:58:55.105946 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:55 crc kubenswrapper[4900]: I1202 15:58:55.107815 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:55 crc kubenswrapper[4900]: I1202 15:58:55.155849 4900 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:55 crc kubenswrapper[4900]: I1202 15:58:55.580105 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:55 crc kubenswrapper[4900]: I1202 15:58:55.633508 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vvsp2"] Dec 02 15:58:57 crc kubenswrapper[4900]: I1202 15:58:57.566621 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vvsp2" podUID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" containerName="registry-server" containerID="cri-o://573f232d28e7a5a85fc31c63fa12b0400a5c2db1ac3fdb50450f7b7ad103ffb1" gracePeriod=2 Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.585205 4900 generic.go:334] "Generic (PLEG): container finished" podID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" containerID="573f232d28e7a5a85fc31c63fa12b0400a5c2db1ac3fdb50450f7b7ad103ffb1" exitCode=0 Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.585803 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvsp2" event={"ID":"993bd8bd-db80-4c6a-8f4f-e57a55a2c581","Type":"ContainerDied","Data":"573f232d28e7a5a85fc31c63fa12b0400a5c2db1ac3fdb50450f7b7ad103ffb1"} Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.585844 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvsp2" event={"ID":"993bd8bd-db80-4c6a-8f4f-e57a55a2c581","Type":"ContainerDied","Data":"d8205653f78823c073d40e0e3a07322d2337b513ae750fe0bbafa6611898e17b"} Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.585857 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8205653f78823c073d40e0e3a07322d2337b513ae750fe0bbafa6611898e17b" Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.629541 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.749086 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-utilities\") pod \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.749499 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-catalog-content\") pod \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.749533 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jswtx\" (UniqueName: \"kubernetes.io/projected/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-kube-api-access-jswtx\") pod \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\" (UID: \"993bd8bd-db80-4c6a-8f4f-e57a55a2c581\") " Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.749972 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-utilities" (OuterVolumeSpecName: "utilities") pod "993bd8bd-db80-4c6a-8f4f-e57a55a2c581" (UID: "993bd8bd-db80-4c6a-8f4f-e57a55a2c581"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.750298 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.755742 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-kube-api-access-jswtx" (OuterVolumeSpecName: "kube-api-access-jswtx") pod "993bd8bd-db80-4c6a-8f4f-e57a55a2c581" (UID: "993bd8bd-db80-4c6a-8f4f-e57a55a2c581"). InnerVolumeSpecName "kube-api-access-jswtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.804368 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "993bd8bd-db80-4c6a-8f4f-e57a55a2c581" (UID: "993bd8bd-db80-4c6a-8f4f-e57a55a2c581"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.851600 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:58:58 crc kubenswrapper[4900]: I1202 15:58:58.851632 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jswtx\" (UniqueName: \"kubernetes.io/projected/993bd8bd-db80-4c6a-8f4f-e57a55a2c581-kube-api-access-jswtx\") on node \"crc\" DevicePath \"\"" Dec 02 15:58:59 crc kubenswrapper[4900]: I1202 15:58:59.598732 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vvsp2" Dec 02 15:58:59 crc kubenswrapper[4900]: I1202 15:58:59.637187 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vvsp2"] Dec 02 15:58:59 crc kubenswrapper[4900]: I1202 15:58:59.646758 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vvsp2"] Dec 02 15:59:00 crc kubenswrapper[4900]: I1202 15:59:00.936055 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" path="/var/lib/kubelet/pods/993bd8bd-db80-4c6a-8f4f-e57a55a2c581/volumes" Dec 02 15:59:03 crc kubenswrapper[4900]: I1202 15:59:03.664181 4900 generic.go:334] "Generic (PLEG): container finished" podID="c26b8eb5-c400-47f5-ae09-765101884ea4" containerID="4129eee7dfb7a6d005bd295e313ebac47eb34e2d2914709713e038825d13818f" exitCode=0 Dec 02 15:59:03 crc kubenswrapper[4900]: I1202 15:59:03.664231 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" event={"ID":"c26b8eb5-c400-47f5-ae09-765101884ea4","Type":"ContainerDied","Data":"4129eee7dfb7a6d005bd295e313ebac47eb34e2d2914709713e038825d13818f"} Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.163045 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.191384 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-0\") pod \"c26b8eb5-c400-47f5-ae09-765101884ea4\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.191449 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmzq4\" (UniqueName: \"kubernetes.io/projected/c26b8eb5-c400-47f5-ae09-765101884ea4-kube-api-access-nmzq4\") pod \"c26b8eb5-c400-47f5-ae09-765101884ea4\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.191474 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-1\") pod \"c26b8eb5-c400-47f5-ae09-765101884ea4\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.191497 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ssh-key\") pod \"c26b8eb5-c400-47f5-ae09-765101884ea4\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.214001 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26b8eb5-c400-47f5-ae09-765101884ea4-kube-api-access-nmzq4" (OuterVolumeSpecName: "kube-api-access-nmzq4") pod "c26b8eb5-c400-47f5-ae09-765101884ea4" (UID: "c26b8eb5-c400-47f5-ae09-765101884ea4"). InnerVolumeSpecName "kube-api-access-nmzq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.244216 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c26b8eb5-c400-47f5-ae09-765101884ea4" (UID: "c26b8eb5-c400-47f5-ae09-765101884ea4"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.250008 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c26b8eb5-c400-47f5-ae09-765101884ea4" (UID: "c26b8eb5-c400-47f5-ae09-765101884ea4"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.259030 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c26b8eb5-c400-47f5-ae09-765101884ea4" (UID: "c26b8eb5-c400-47f5-ae09-765101884ea4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.293399 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-combined-ca-bundle\") pod \"c26b8eb5-c400-47f5-ae09-765101884ea4\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.293471 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-1\") pod \"c26b8eb5-c400-47f5-ae09-765101884ea4\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.293519 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-inventory\") pod \"c26b8eb5-c400-47f5-ae09-765101884ea4\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.293547 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-0\") pod \"c26b8eb5-c400-47f5-ae09-765101884ea4\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.293627 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ceph\") pod \"c26b8eb5-c400-47f5-ae09-765101884ea4\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.293934 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-1\") pod \"c26b8eb5-c400-47f5-ae09-765101884ea4\" (UID: 
\"c26b8eb5-c400-47f5-ae09-765101884ea4\") " Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.293987 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-0\") pod \"c26b8eb5-c400-47f5-ae09-765101884ea4\" (UID: \"c26b8eb5-c400-47f5-ae09-765101884ea4\") " Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.294562 4900 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.294577 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmzq4\" (UniqueName: \"kubernetes.io/projected/c26b8eb5-c400-47f5-ae09-765101884ea4-kube-api-access-nmzq4\") on node \"crc\" DevicePath \"\"" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.294591 4900 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.294606 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.298908 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ceph" (OuterVolumeSpecName: "ceph") pod "c26b8eb5-c400-47f5-ae09-765101884ea4" (UID: "c26b8eb5-c400-47f5-ae09-765101884ea4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.301162 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "c26b8eb5-c400-47f5-ae09-765101884ea4" (UID: "c26b8eb5-c400-47f5-ae09-765101884ea4"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.325351 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c26b8eb5-c400-47f5-ae09-765101884ea4" (UID: "c26b8eb5-c400-47f5-ae09-765101884ea4"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.327040 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "c26b8eb5-c400-47f5-ae09-765101884ea4" (UID: "c26b8eb5-c400-47f5-ae09-765101884ea4"). InnerVolumeSpecName "nova-cells-global-config-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.327266 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-inventory" (OuterVolumeSpecName: "inventory") pod "c26b8eb5-c400-47f5-ae09-765101884ea4" (UID: "c26b8eb5-c400-47f5-ae09-765101884ea4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.329793 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "c26b8eb5-c400-47f5-ae09-765101884ea4" (UID: "c26b8eb5-c400-47f5-ae09-765101884ea4"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.340359 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c26b8eb5-c400-47f5-ae09-765101884ea4" (UID: "c26b8eb5-c400-47f5-ae09-765101884ea4"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.396805 4900 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.396878 4900 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.396888 4900 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.396899 4900 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.396909 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.396917 4900 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c26b8eb5-c400-47f5-ae09-765101884ea4-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.396942 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c26b8eb5-c400-47f5-ae09-765101884ea4-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.683714 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" event={"ID":"c26b8eb5-c400-47f5-ae09-765101884ea4","Type":"ContainerDied","Data":"0f4dfec28c9980318511b663db7dda3a4234ded36233796a86affaf1cbfdda7e"} Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.683773 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f4dfec28c9980318511b663db7dda3a4234ded36233796a86affaf1cbfdda7e" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.683872 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-prqhc" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.786767 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-njxk6"] Dec 02 15:59:05 crc kubenswrapper[4900]: E1202 15:59:05.787195 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26b8eb5-c400-47f5-ae09-765101884ea4" containerName="nova-cell1-openstack-openstack-cell1" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.787213 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26b8eb5-c400-47f5-ae09-765101884ea4" containerName="nova-cell1-openstack-openstack-cell1" Dec 02 15:59:05 crc kubenswrapper[4900]: E1202 15:59:05.787229 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" containerName="extract-content" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.787236 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" containerName="extract-content" Dec 02 15:59:05 crc kubenswrapper[4900]: E1202 15:59:05.787270 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" containerName="extract-utilities" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.787277 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" containerName="extract-utilities" Dec 02 15:59:05 crc kubenswrapper[4900]: E1202 15:59:05.787289 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" containerName="registry-server" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.787295 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" containerName="registry-server" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.787489 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26b8eb5-c400-47f5-ae09-765101884ea4" containerName="nova-cell1-openstack-openstack-cell1" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.787516 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="993bd8bd-db80-4c6a-8f4f-e57a55a2c581" containerName="registry-server" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.788236 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.792002 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.792289 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.792314 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.792622 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.792716 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.811173 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-njxk6"] Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.910837 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-inventory\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.910895 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.911378 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdsbv\" (UniqueName: \"kubernetes.io/projected/4014ba6f-1f86-42af-8f44-4ce633ea6288-kube-api-access-xdsbv\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.911434 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.911904 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceph\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.912105 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" 
(UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.912503 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ssh-key\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:05 crc kubenswrapper[4900]: I1202 15:59:05.912587 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.016441 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ssh-key\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.016507 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.016556 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-inventory\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.016580 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.016629 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdsbv\" (UniqueName: \"kubernetes.io/projected/4014ba6f-1f86-42af-8f44-4ce633ea6288-kube-api-access-xdsbv\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.016692 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-1\") pod 
\"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.016769 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceph\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.016810 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.020820 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.020901 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.022171 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.023583 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.023664 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceph\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.025458 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ssh-key\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc 
kubenswrapper[4900]: I1202 15:59:06.025926 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-inventory\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.034984 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdsbv\" (UniqueName: \"kubernetes.io/projected/4014ba6f-1f86-42af-8f44-4ce633ea6288-kube-api-access-xdsbv\") pod \"telemetry-openstack-openstack-cell1-njxk6\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.170926 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 15:59:06 crc kubenswrapper[4900]: I1202 15:59:06.769020 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-njxk6"] Dec 02 15:59:06 crc kubenswrapper[4900]: W1202 15:59:06.771487 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4014ba6f_1f86_42af_8f44_4ce633ea6288.slice/crio-f5e4ccf146a5e9530f0ebcea1517e5d3815c7445e29f540b40f7a156fd5ce58e WatchSource:0}: Error finding container f5e4ccf146a5e9530f0ebcea1517e5d3815c7445e29f540b40f7a156fd5ce58e: Status 404 returned error can't find the container with id f5e4ccf146a5e9530f0ebcea1517e5d3815c7445e29f540b40f7a156fd5ce58e Dec 02 15:59:07 crc kubenswrapper[4900]: I1202 15:59:07.717415 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-njxk6" event={"ID":"4014ba6f-1f86-42af-8f44-4ce633ea6288","Type":"ContainerStarted","Data":"f5e4ccf146a5e9530f0ebcea1517e5d3815c7445e29f540b40f7a156fd5ce58e"} Dec 02 15:59:08 crc kubenswrapper[4900]: I1202 15:59:08.733849 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-njxk6" event={"ID":"4014ba6f-1f86-42af-8f44-4ce633ea6288","Type":"ContainerStarted","Data":"639dbb06520aac83a237f53a894d21e50a6d4347dae6c6780c73457bf0c48dc3"} Dec 02 15:59:08 crc kubenswrapper[4900]: I1202 15:59:08.759094 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-njxk6" podStartSLOduration=2.950605848 podStartE2EDuration="3.759074758s" podCreationTimestamp="2025-12-02 15:59:05 +0000 UTC" firstStartedPulling="2025-12-02 15:59:06.77409522 +0000 UTC m=+8192.189909071" lastFinishedPulling="2025-12-02 15:59:07.58256414 +0000 UTC m=+8192.998377981" observedRunningTime="2025-12-02 15:59:08.74994387 +0000 UTC m=+8194.165757721" watchObservedRunningTime="2025-12-02 15:59:08.759074758 +0000 UTC m=+8194.174888609" Dec 02 15:59:45 crc kubenswrapper[4900]: I1202 15:59:45.116875 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:59:45 crc kubenswrapper[4900]: I1202 15:59:45.117397 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" 
podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.174531 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz"] Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.176543 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.179689 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.179832 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.193464 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz"] Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.313984 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c84e39c-5a5e-4893-aa77-379749ad3047-config-volume\") pod \"collect-profiles-29411520-jdqpz\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.314432 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7p8\" (UniqueName: \"kubernetes.io/projected/6c84e39c-5a5e-4893-aa77-379749ad3047-kube-api-access-mw7p8\") pod \"collect-profiles-29411520-jdqpz\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.314587 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c84e39c-5a5e-4893-aa77-379749ad3047-secret-volume\") pod \"collect-profiles-29411520-jdqpz\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.416827 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c84e39c-5a5e-4893-aa77-379749ad3047-secret-volume\") pod \"collect-profiles-29411520-jdqpz\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.416946 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c84e39c-5a5e-4893-aa77-379749ad3047-config-volume\") pod \"collect-profiles-29411520-jdqpz\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.417001 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw7p8\" 
(UniqueName: \"kubernetes.io/projected/6c84e39c-5a5e-4893-aa77-379749ad3047-kube-api-access-mw7p8\") pod \"collect-profiles-29411520-jdqpz\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.419351 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c84e39c-5a5e-4893-aa77-379749ad3047-config-volume\") pod \"collect-profiles-29411520-jdqpz\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.425315 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c84e39c-5a5e-4893-aa77-379749ad3047-secret-volume\") pod \"collect-profiles-29411520-jdqpz\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.441366 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw7p8\" (UniqueName: \"kubernetes.io/projected/6c84e39c-5a5e-4893-aa77-379749ad3047-kube-api-access-mw7p8\") pod \"collect-profiles-29411520-jdqpz\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.499232 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:00 crc kubenswrapper[4900]: I1202 16:00:00.975053 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz"] Dec 02 16:00:01 crc kubenswrapper[4900]: I1202 16:00:01.463012 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" event={"ID":"6c84e39c-5a5e-4893-aa77-379749ad3047","Type":"ContainerStarted","Data":"72940db454315fe43f28f277711eaa3db60547e940e0ce0e07b53e81d9f71460"} Dec 02 16:00:01 crc kubenswrapper[4900]: I1202 16:00:01.463259 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" event={"ID":"6c84e39c-5a5e-4893-aa77-379749ad3047","Type":"ContainerStarted","Data":"6ce23a5b9ec213c58d34e31958e38a0671b05bc3d4d90eb9d768d7e67b7ee0d5"} Dec 02 16:00:01 crc kubenswrapper[4900]: I1202 16:00:01.506931 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" podStartSLOduration=1.506908159 podStartE2EDuration="1.506908159s" podCreationTimestamp="2025-12-02 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:00:01.489324471 +0000 UTC m=+8246.905138332" watchObservedRunningTime="2025-12-02 16:00:01.506908159 +0000 UTC m=+8246.922722010" Dec 02 16:00:02 crc kubenswrapper[4900]: I1202 16:00:02.474566 4900 generic.go:334] "Generic (PLEG): container finished" podID="6c84e39c-5a5e-4893-aa77-379749ad3047" containerID="72940db454315fe43f28f277711eaa3db60547e940e0ce0e07b53e81d9f71460" exitCode=0 Dec 02 16:00:02 crc kubenswrapper[4900]: I1202 16:00:02.474932 4900 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" event={"ID":"6c84e39c-5a5e-4893-aa77-379749ad3047","Type":"ContainerDied","Data":"72940db454315fe43f28f277711eaa3db60547e940e0ce0e07b53e81d9f71460"} Dec 02 16:00:03 crc kubenswrapper[4900]: I1202 16:00:03.894988 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.004155 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c84e39c-5a5e-4893-aa77-379749ad3047-secret-volume\") pod \"6c84e39c-5a5e-4893-aa77-379749ad3047\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.004200 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw7p8\" (UniqueName: \"kubernetes.io/projected/6c84e39c-5a5e-4893-aa77-379749ad3047-kube-api-access-mw7p8\") pod \"6c84e39c-5a5e-4893-aa77-379749ad3047\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.004427 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c84e39c-5a5e-4893-aa77-379749ad3047-config-volume\") pod \"6c84e39c-5a5e-4893-aa77-379749ad3047\" (UID: \"6c84e39c-5a5e-4893-aa77-379749ad3047\") " Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.005045 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c84e39c-5a5e-4893-aa77-379749ad3047-config-volume" (OuterVolumeSpecName: "config-volume") pod "6c84e39c-5a5e-4893-aa77-379749ad3047" (UID: "6c84e39c-5a5e-4893-aa77-379749ad3047"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.010062 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c84e39c-5a5e-4893-aa77-379749ad3047-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6c84e39c-5a5e-4893-aa77-379749ad3047" (UID: "6c84e39c-5a5e-4893-aa77-379749ad3047"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.010724 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c84e39c-5a5e-4893-aa77-379749ad3047-kube-api-access-mw7p8" (OuterVolumeSpecName: "kube-api-access-mw7p8") pod "6c84e39c-5a5e-4893-aa77-379749ad3047" (UID: "6c84e39c-5a5e-4893-aa77-379749ad3047"). InnerVolumeSpecName "kube-api-access-mw7p8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.107306 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c84e39c-5a5e-4893-aa77-379749ad3047-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.107353 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw7p8\" (UniqueName: \"kubernetes.io/projected/6c84e39c-5a5e-4893-aa77-379749ad3047-kube-api-access-mw7p8\") on node \"crc\" DevicePath \"\"" Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.107367 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c84e39c-5a5e-4893-aa77-379749ad3047-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.499314 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" event={"ID":"6c84e39c-5a5e-4893-aa77-379749ad3047","Type":"ContainerDied","Data":"6ce23a5b9ec213c58d34e31958e38a0671b05bc3d4d90eb9d768d7e67b7ee0d5"} Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.499776 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ce23a5b9ec213c58d34e31958e38a0671b05bc3d4d90eb9d768d7e67b7ee0d5" Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.499408 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411520-jdqpz" Dec 02 16:00:04 crc kubenswrapper[4900]: I1202 16:00:04.991837 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb"] Dec 02 16:00:05 crc kubenswrapper[4900]: I1202 16:00:05.006348 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411475-927wb"] Dec 02 16:00:06 crc kubenswrapper[4900]: I1202 16:00:06.922980 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ce3395-7fb2-44c6-a046-01596df95ec7" path="/var/lib/kubelet/pods/25ce3395-7fb2-44c6-a046-01596df95ec7/volumes" Dec 02 16:00:13 crc kubenswrapper[4900]: I1202 16:00:13.953835 4900 scope.go:117] "RemoveContainer" containerID="9edb6d4ef15b81802da9cb4bc1599eaaf40d8980b0786ae62d36f74515be677b" Dec 02 16:00:15 crc kubenswrapper[4900]: I1202 16:00:15.116548 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:00:15 crc kubenswrapper[4900]: I1202 16:00:15.117150 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:00:35 crc kubenswrapper[4900]: I1202 16:00:35.934949 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8xvm4"] Dec 02 16:00:35 crc kubenswrapper[4900]: E1202 16:00:35.935892 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c84e39c-5a5e-4893-aa77-379749ad3047" 
containerName="collect-profiles" Dec 02 16:00:35 crc kubenswrapper[4900]: I1202 16:00:35.935905 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c84e39c-5a5e-4893-aa77-379749ad3047" containerName="collect-profiles" Dec 02 16:00:35 crc kubenswrapper[4900]: I1202 16:00:35.936118 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c84e39c-5a5e-4893-aa77-379749ad3047" containerName="collect-profiles" Dec 02 16:00:35 crc kubenswrapper[4900]: I1202 16:00:35.937935 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:35 crc kubenswrapper[4900]: I1202 16:00:35.964524 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8xvm4"] Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.029340 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt2zr\" (UniqueName: \"kubernetes.io/projected/50be2382-c2c7-4d7b-b781-a64fc7481587-kube-api-access-pt2zr\") pod \"redhat-operators-8xvm4\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.029748 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-catalog-content\") pod \"redhat-operators-8xvm4\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.030005 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-utilities\") pod \"redhat-operators-8xvm4\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.132191 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-utilities\") pod \"redhat-operators-8xvm4\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.132357 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt2zr\" (UniqueName: \"kubernetes.io/projected/50be2382-c2c7-4d7b-b781-a64fc7481587-kube-api-access-pt2zr\") pod \"redhat-operators-8xvm4\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.132394 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-catalog-content\") pod \"redhat-operators-8xvm4\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.132673 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-utilities\") pod \"redhat-operators-8xvm4\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " 
pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.132933 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-catalog-content\") pod \"redhat-operators-8xvm4\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.150135 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt2zr\" (UniqueName: \"kubernetes.io/projected/50be2382-c2c7-4d7b-b781-a64fc7481587-kube-api-access-pt2zr\") pod \"redhat-operators-8xvm4\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.302491 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.803674 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8xvm4"] Dec 02 16:00:36 crc kubenswrapper[4900]: I1202 16:00:36.833620 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xvm4" event={"ID":"50be2382-c2c7-4d7b-b781-a64fc7481587","Type":"ContainerStarted","Data":"2cf71b2b22cbe92ad53f94f9ac561e011f8848a97714801dcf6b95042a6c1f71"} Dec 02 16:00:37 crc kubenswrapper[4900]: I1202 16:00:37.854324 4900 generic.go:334] "Generic (PLEG): container finished" podID="50be2382-c2c7-4d7b-b781-a64fc7481587" containerID="a222268b4c56d1bf55905860e8480f8e2131bd216a09bf99ebcd4a553ee6ca79" exitCode=0 Dec 02 16:00:37 crc kubenswrapper[4900]: I1202 16:00:37.854374 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xvm4" event={"ID":"50be2382-c2c7-4d7b-b781-a64fc7481587","Type":"ContainerDied","Data":"a222268b4c56d1bf55905860e8480f8e2131bd216a09bf99ebcd4a553ee6ca79"} Dec 02 16:00:37 crc kubenswrapper[4900]: I1202 16:00:37.857351 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 16:00:40 crc kubenswrapper[4900]: I1202 16:00:40.891501 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xvm4" event={"ID":"50be2382-c2c7-4d7b-b781-a64fc7481587","Type":"ContainerStarted","Data":"8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29"} Dec 02 16:00:43 crc kubenswrapper[4900]: I1202 16:00:43.924198 4900 generic.go:334] "Generic (PLEG): container finished" podID="50be2382-c2c7-4d7b-b781-a64fc7481587" containerID="8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29" exitCode=0 Dec 02 16:00:43 crc kubenswrapper[4900]: I1202 16:00:43.924742 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xvm4" event={"ID":"50be2382-c2c7-4d7b-b781-a64fc7481587","Type":"ContainerDied","Data":"8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29"} Dec 02 16:00:45 crc kubenswrapper[4900]: I1202 16:00:45.116367 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:00:45 crc kubenswrapper[4900]: I1202 
16:00:45.116425 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:00:45 crc kubenswrapper[4900]: I1202 16:00:45.116470 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 16:00:45 crc kubenswrapper[4900]: I1202 16:00:45.117303 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7e9b4667adc2d9a1e9f8bc11fceab02ccb34f30d29faab5adf246cc2018145e"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 16:00:45 crc kubenswrapper[4900]: I1202 16:00:45.117378 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://e7e9b4667adc2d9a1e9f8bc11fceab02ccb34f30d29faab5adf246cc2018145e" gracePeriod=600 Dec 02 16:00:45 crc kubenswrapper[4900]: I1202 16:00:45.948118 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="e7e9b4667adc2d9a1e9f8bc11fceab02ccb34f30d29faab5adf246cc2018145e" exitCode=0 Dec 02 16:00:45 crc kubenswrapper[4900]: I1202 16:00:45.948214 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"e7e9b4667adc2d9a1e9f8bc11fceab02ccb34f30d29faab5adf246cc2018145e"} Dec 02 16:00:45 crc kubenswrapper[4900]: I1202 16:00:45.948683 4900 scope.go:117] "RemoveContainer" containerID="83900aef06f6da4d812af59a9bf0db76af5a11e42330f5144cec063c84f3552a" Dec 02 16:00:46 crc kubenswrapper[4900]: I1202 16:00:46.963020 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb"} Dec 02 16:00:47 crc kubenswrapper[4900]: I1202 16:00:47.975555 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xvm4" event={"ID":"50be2382-c2c7-4d7b-b781-a64fc7481587","Type":"ContainerStarted","Data":"332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96"} Dec 02 16:00:48 crc kubenswrapper[4900]: I1202 16:00:48.003319 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8xvm4" podStartSLOduration=4.075243663 podStartE2EDuration="13.003292991s" podCreationTimestamp="2025-12-02 16:00:35 +0000 UTC" firstStartedPulling="2025-12-02 16:00:37.857067123 +0000 UTC m=+8283.272880974" lastFinishedPulling="2025-12-02 16:00:46.785116441 +0000 UTC m=+8292.200930302" observedRunningTime="2025-12-02 16:00:47.998092923 +0000 UTC m=+8293.413906774" watchObservedRunningTime="2025-12-02 16:00:48.003292991 +0000 UTC m=+8293.419106852" Dec 02 16:00:56 crc kubenswrapper[4900]: I1202 16:00:56.302966 4900 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:56 crc kubenswrapper[4900]: I1202 16:00:56.303904 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:56 crc kubenswrapper[4900]: I1202 16:00:56.448929 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:57 crc kubenswrapper[4900]: I1202 16:00:57.122757 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:57 crc kubenswrapper[4900]: I1202 16:00:57.167999 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8xvm4"] Dec 02 16:00:59 crc kubenswrapper[4900]: I1202 16:00:59.090012 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8xvm4" podUID="50be2382-c2c7-4d7b-b781-a64fc7481587" containerName="registry-server" containerID="cri-o://332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96" gracePeriod=2 Dec 02 16:00:59 crc kubenswrapper[4900]: I1202 16:00:59.596867 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:00:59 crc kubenswrapper[4900]: I1202 16:00:59.673136 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-catalog-content\") pod \"50be2382-c2c7-4d7b-b781-a64fc7481587\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " Dec 02 16:00:59 crc kubenswrapper[4900]: I1202 16:00:59.673296 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt2zr\" (UniqueName: \"kubernetes.io/projected/50be2382-c2c7-4d7b-b781-a64fc7481587-kube-api-access-pt2zr\") pod \"50be2382-c2c7-4d7b-b781-a64fc7481587\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " Dec 02 16:00:59 crc kubenswrapper[4900]: I1202 16:00:59.673374 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-utilities\") pod \"50be2382-c2c7-4d7b-b781-a64fc7481587\" (UID: \"50be2382-c2c7-4d7b-b781-a64fc7481587\") " Dec 02 16:00:59 crc kubenswrapper[4900]: I1202 16:00:59.674611 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-utilities" (OuterVolumeSpecName: "utilities") pod "50be2382-c2c7-4d7b-b781-a64fc7481587" (UID: "50be2382-c2c7-4d7b-b781-a64fc7481587"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:00:59 crc kubenswrapper[4900]: I1202 16:00:59.679988 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50be2382-c2c7-4d7b-b781-a64fc7481587-kube-api-access-pt2zr" (OuterVolumeSpecName: "kube-api-access-pt2zr") pod "50be2382-c2c7-4d7b-b781-a64fc7481587" (UID: "50be2382-c2c7-4d7b-b781-a64fc7481587"). InnerVolumeSpecName "kube-api-access-pt2zr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:00:59 crc kubenswrapper[4900]: I1202 16:00:59.776076 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt2zr\" (UniqueName: \"kubernetes.io/projected/50be2382-c2c7-4d7b-b781-a64fc7481587-kube-api-access-pt2zr\") on node \"crc\" DevicePath \"\"" Dec 02 16:00:59 crc kubenswrapper[4900]: I1202 16:00:59.776113 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 16:00:59 crc kubenswrapper[4900]: I1202 16:00:59.784398 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50be2382-c2c7-4d7b-b781-a64fc7481587" (UID: "50be2382-c2c7-4d7b-b781-a64fc7481587"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:00:59 crc kubenswrapper[4900]: I1202 16:00:59.878088 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50be2382-c2c7-4d7b-b781-a64fc7481587-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.102739 4900 generic.go:334] "Generic (PLEG): container finished" podID="50be2382-c2c7-4d7b-b781-a64fc7481587" containerID="332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96" exitCode=0 Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.102808 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8xvm4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.102827 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xvm4" event={"ID":"50be2382-c2c7-4d7b-b781-a64fc7481587","Type":"ContainerDied","Data":"332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96"} Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.104294 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xvm4" event={"ID":"50be2382-c2c7-4d7b-b781-a64fc7481587","Type":"ContainerDied","Data":"2cf71b2b22cbe92ad53f94f9ac561e011f8848a97714801dcf6b95042a6c1f71"} Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.104376 4900 scope.go:117] "RemoveContainer" containerID="332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.159720 4900 scope.go:117] "RemoveContainer" containerID="8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.172781 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8xvm4"] Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.185399 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29411521-64bz4"] Dec 02 16:01:00 crc kubenswrapper[4900]: E1202 16:01:00.185944 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50be2382-c2c7-4d7b-b781-a64fc7481587" containerName="registry-server" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.185962 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="50be2382-c2c7-4d7b-b781-a64fc7481587" containerName="registry-server" Dec 02 16:01:00 crc kubenswrapper[4900]: E1202 16:01:00.185981 4900 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50be2382-c2c7-4d7b-b781-a64fc7481587" containerName="extract-utilities" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.185987 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="50be2382-c2c7-4d7b-b781-a64fc7481587" containerName="extract-utilities" Dec 02 16:01:00 crc kubenswrapper[4900]: E1202 16:01:00.186014 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50be2382-c2c7-4d7b-b781-a64fc7481587" containerName="extract-content" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.186020 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="50be2382-c2c7-4d7b-b781-a64fc7481587" containerName="extract-content" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.186202 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="50be2382-c2c7-4d7b-b781-a64fc7481587" containerName="registry-server" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.186980 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.215596 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8xvm4"] Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.215674 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29411521-64bz4"] Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.276506 4900 scope.go:117] "RemoveContainer" containerID="a222268b4c56d1bf55905860e8480f8e2131bd216a09bf99ebcd4a553ee6ca79" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.288171 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mswvc\" (UniqueName: \"kubernetes.io/projected/defcc634-cea1-4350-9e5f-f714f766b6c8-kube-api-access-mswvc\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.288267 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-fernet-keys\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.288391 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-config-data\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.288446 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-combined-ca-bundle\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.313267 4900 scope.go:117] "RemoveContainer" containerID="332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96" Dec 02 16:01:00 crc kubenswrapper[4900]: E1202 16:01:00.313627 4900 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96\": container with ID starting with 332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96 not found: ID does not exist" containerID="332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.313675 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96"} err="failed to get container status \"332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96\": rpc error: code = NotFound desc = could not find container \"332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96\": container with ID starting with 332c02249dedd1b98884668b17cba5934adb2823920a7eb66458bd316c857d96 not found: ID does not exist" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.313702 4900 scope.go:117] "RemoveContainer" containerID="8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29" Dec 02 16:01:00 crc kubenswrapper[4900]: E1202 16:01:00.313931 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29\": container with ID starting with 8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29 not found: ID does not exist" containerID="8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.313960 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29"} err="failed to get container status \"8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29\": rpc error: code = NotFound desc = could not find container \"8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29\": container with ID starting with 8e771e7a10c87069b60895110d60963a05f847ba4a043809b94edfb04568ad29 not found: ID does not exist" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.313980 4900 scope.go:117] "RemoveContainer" containerID="a222268b4c56d1bf55905860e8480f8e2131bd216a09bf99ebcd4a553ee6ca79" Dec 02 16:01:00 crc kubenswrapper[4900]: E1202 16:01:00.314222 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a222268b4c56d1bf55905860e8480f8e2131bd216a09bf99ebcd4a553ee6ca79\": container with ID starting with a222268b4c56d1bf55905860e8480f8e2131bd216a09bf99ebcd4a553ee6ca79 not found: ID does not exist" containerID="a222268b4c56d1bf55905860e8480f8e2131bd216a09bf99ebcd4a553ee6ca79" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.314241 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a222268b4c56d1bf55905860e8480f8e2131bd216a09bf99ebcd4a553ee6ca79"} err="failed to get container status \"a222268b4c56d1bf55905860e8480f8e2131bd216a09bf99ebcd4a553ee6ca79\": rpc error: code = NotFound desc = could not find container \"a222268b4c56d1bf55905860e8480f8e2131bd216a09bf99ebcd4a553ee6ca79\": container with ID starting with a222268b4c56d1bf55905860e8480f8e2131bd216a09bf99ebcd4a553ee6ca79 not found: ID does not exist" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.397004 4900 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-fernet-keys\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.397254 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-config-data\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.397333 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-combined-ca-bundle\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.397426 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mswvc\" (UniqueName: \"kubernetes.io/projected/defcc634-cea1-4350-9e5f-f714f766b6c8-kube-api-access-mswvc\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.410154 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-fernet-keys\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.420545 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-config-data\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.428287 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mswvc\" (UniqueName: \"kubernetes.io/projected/defcc634-cea1-4350-9e5f-f714f766b6c8-kube-api-access-mswvc\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.428483 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-combined-ca-bundle\") pod \"keystone-cron-29411521-64bz4\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.651161 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:00 crc kubenswrapper[4900]: I1202 16:01:00.922730 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50be2382-c2c7-4d7b-b781-a64fc7481587" path="/var/lib/kubelet/pods/50be2382-c2c7-4d7b-b781-a64fc7481587/volumes" Dec 02 16:01:01 crc kubenswrapper[4900]: I1202 16:01:01.119881 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29411521-64bz4"] Dec 02 16:01:02 crc kubenswrapper[4900]: I1202 16:01:02.130211 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411521-64bz4" event={"ID":"defcc634-cea1-4350-9e5f-f714f766b6c8","Type":"ContainerStarted","Data":"7712f82c94c41d777cf29a9845fd072febb67ab156886cfbcc4738c82eea1a32"} Dec 02 16:01:02 crc kubenswrapper[4900]: I1202 16:01:02.130557 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411521-64bz4" event={"ID":"defcc634-cea1-4350-9e5f-f714f766b6c8","Type":"ContainerStarted","Data":"b5ef96a2539fc8b789ca6598698c6bfd8b2397fe419e15d67c218c30b24529e6"} Dec 02 16:01:02 crc kubenswrapper[4900]: I1202 16:01:02.151024 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29411521-64bz4" podStartSLOduration=2.151001591 podStartE2EDuration="2.151001591s" podCreationTimestamp="2025-12-02 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:01:02.148026006 +0000 UTC m=+8307.563839867" watchObservedRunningTime="2025-12-02 16:01:02.151001591 +0000 UTC m=+8307.566815462" Dec 02 16:01:05 crc kubenswrapper[4900]: I1202 16:01:05.163257 4900 generic.go:334] "Generic (PLEG): container finished" podID="defcc634-cea1-4350-9e5f-f714f766b6c8" containerID="7712f82c94c41d777cf29a9845fd072febb67ab156886cfbcc4738c82eea1a32" exitCode=0 Dec 02 16:01:05 crc kubenswrapper[4900]: I1202 16:01:05.163333 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411521-64bz4" event={"ID":"defcc634-cea1-4350-9e5f-f714f766b6c8","Type":"ContainerDied","Data":"7712f82c94c41d777cf29a9845fd072febb67ab156886cfbcc4738c82eea1a32"} Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.540348 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.675624 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-fernet-keys\") pod \"defcc634-cea1-4350-9e5f-f714f766b6c8\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.675988 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-combined-ca-bundle\") pod \"defcc634-cea1-4350-9e5f-f714f766b6c8\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.676156 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mswvc\" (UniqueName: \"kubernetes.io/projected/defcc634-cea1-4350-9e5f-f714f766b6c8-kube-api-access-mswvc\") pod \"defcc634-cea1-4350-9e5f-f714f766b6c8\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.676378 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-config-data\") pod \"defcc634-cea1-4350-9e5f-f714f766b6c8\" (UID: \"defcc634-cea1-4350-9e5f-f714f766b6c8\") " Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.695802 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "defcc634-cea1-4350-9e5f-f714f766b6c8" (UID: "defcc634-cea1-4350-9e5f-f714f766b6c8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.695918 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defcc634-cea1-4350-9e5f-f714f766b6c8-kube-api-access-mswvc" (OuterVolumeSpecName: "kube-api-access-mswvc") pod "defcc634-cea1-4350-9e5f-f714f766b6c8" (UID: "defcc634-cea1-4350-9e5f-f714f766b6c8"). InnerVolumeSpecName "kube-api-access-mswvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.720080 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "defcc634-cea1-4350-9e5f-f714f766b6c8" (UID: "defcc634-cea1-4350-9e5f-f714f766b6c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.737587 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-config-data" (OuterVolumeSpecName: "config-data") pod "defcc634-cea1-4350-9e5f-f714f766b6c8" (UID: "defcc634-cea1-4350-9e5f-f714f766b6c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.778922 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.779042 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mswvc\" (UniqueName: \"kubernetes.io/projected/defcc634-cea1-4350-9e5f-f714f766b6c8-kube-api-access-mswvc\") on node \"crc\" DevicePath \"\"" Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.779137 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 16:01:06 crc kubenswrapper[4900]: I1202 16:01:06.779202 4900 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/defcc634-cea1-4350-9e5f-f714f766b6c8-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 16:01:07 crc kubenswrapper[4900]: I1202 16:01:07.185655 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411521-64bz4" event={"ID":"defcc634-cea1-4350-9e5f-f714f766b6c8","Type":"ContainerDied","Data":"b5ef96a2539fc8b789ca6598698c6bfd8b2397fe419e15d67c218c30b24529e6"} Dec 02 16:01:07 crc kubenswrapper[4900]: I1202 16:01:07.185701 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5ef96a2539fc8b789ca6598698c6bfd8b2397fe419e15d67c218c30b24529e6" Dec 02 16:01:07 crc kubenswrapper[4900]: I1202 16:01:07.185772 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29411521-64bz4" Dec 02 16:03:15 crc kubenswrapper[4900]: I1202 16:03:15.117129 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:03:15 crc kubenswrapper[4900]: I1202 16:03:15.117801 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:03:17 crc kubenswrapper[4900]: I1202 16:03:17.537412 4900 generic.go:334] "Generic (PLEG): container finished" podID="4014ba6f-1f86-42af-8f44-4ce633ea6288" containerID="639dbb06520aac83a237f53a894d21e50a6d4347dae6c6780c73457bf0c48dc3" exitCode=0 Dec 02 16:03:17 crc kubenswrapper[4900]: I1202 16:03:17.537611 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-njxk6" event={"ID":"4014ba6f-1f86-42af-8f44-4ce633ea6288","Type":"ContainerDied","Data":"639dbb06520aac83a237f53a894d21e50a6d4347dae6c6780c73457bf0c48dc3"} Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.032072 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.164243 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-inventory\") pod \"4014ba6f-1f86-42af-8f44-4ce633ea6288\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.164323 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-telemetry-combined-ca-bundle\") pod \"4014ba6f-1f86-42af-8f44-4ce633ea6288\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.164352 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ssh-key\") pod \"4014ba6f-1f86-42af-8f44-4ce633ea6288\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.164463 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-2\") pod \"4014ba6f-1f86-42af-8f44-4ce633ea6288\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.164532 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-1\") pod \"4014ba6f-1f86-42af-8f44-4ce633ea6288\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.164711 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceph\") pod \"4014ba6f-1f86-42af-8f44-4ce633ea6288\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.164748 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdsbv\" (UniqueName: \"kubernetes.io/projected/4014ba6f-1f86-42af-8f44-4ce633ea6288-kube-api-access-xdsbv\") pod \"4014ba6f-1f86-42af-8f44-4ce633ea6288\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.164853 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-0\") pod \"4014ba6f-1f86-42af-8f44-4ce633ea6288\" (UID: \"4014ba6f-1f86-42af-8f44-4ce633ea6288\") " Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.169866 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceph" (OuterVolumeSpecName: "ceph") pod "4014ba6f-1f86-42af-8f44-4ce633ea6288" (UID: "4014ba6f-1f86-42af-8f44-4ce633ea6288"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.170862 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4014ba6f-1f86-42af-8f44-4ce633ea6288" (UID: "4014ba6f-1f86-42af-8f44-4ce633ea6288"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.171924 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4014ba6f-1f86-42af-8f44-4ce633ea6288-kube-api-access-xdsbv" (OuterVolumeSpecName: "kube-api-access-xdsbv") pod "4014ba6f-1f86-42af-8f44-4ce633ea6288" (UID: "4014ba6f-1f86-42af-8f44-4ce633ea6288"). InnerVolumeSpecName "kube-api-access-xdsbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.196411 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "4014ba6f-1f86-42af-8f44-4ce633ea6288" (UID: "4014ba6f-1f86-42af-8f44-4ce633ea6288"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.196709 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "4014ba6f-1f86-42af-8f44-4ce633ea6288" (UID: "4014ba6f-1f86-42af-8f44-4ce633ea6288"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.197853 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4014ba6f-1f86-42af-8f44-4ce633ea6288" (UID: "4014ba6f-1f86-42af-8f44-4ce633ea6288"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.206979 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "4014ba6f-1f86-42af-8f44-4ce633ea6288" (UID: "4014ba6f-1f86-42af-8f44-4ce633ea6288"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.207435 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-inventory" (OuterVolumeSpecName: "inventory") pod "4014ba6f-1f86-42af-8f44-4ce633ea6288" (UID: "4014ba6f-1f86-42af-8f44-4ce633ea6288"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.267662 4900 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.267699 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.267714 4900 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.267726 4900 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.267741 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.267756 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdsbv\" (UniqueName: \"kubernetes.io/projected/4014ba6f-1f86-42af-8f44-4ce633ea6288-kube-api-access-xdsbv\") on node \"crc\" DevicePath \"\"" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.267770 4900 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.267782 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4014ba6f-1f86-42af-8f44-4ce633ea6288-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.565283 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-njxk6" event={"ID":"4014ba6f-1f86-42af-8f44-4ce633ea6288","Type":"ContainerDied","Data":"f5e4ccf146a5e9530f0ebcea1517e5d3815c7445e29f540b40f7a156fd5ce58e"} Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.565336 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5e4ccf146a5e9530f0ebcea1517e5d3815c7445e29f540b40f7a156fd5ce58e" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.565740 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-njxk6" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.670215 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-28rxz"] Dec 02 16:03:19 crc kubenswrapper[4900]: E1202 16:03:19.670922 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defcc634-cea1-4350-9e5f-f714f766b6c8" containerName="keystone-cron" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.670940 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="defcc634-cea1-4350-9e5f-f714f766b6c8" containerName="keystone-cron" Dec 02 16:03:19 crc kubenswrapper[4900]: E1202 16:03:19.670962 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4014ba6f-1f86-42af-8f44-4ce633ea6288" containerName="telemetry-openstack-openstack-cell1" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.670968 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="4014ba6f-1f86-42af-8f44-4ce633ea6288" containerName="telemetry-openstack-openstack-cell1" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.671259 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="defcc634-cea1-4350-9e5f-f714f766b6c8" containerName="keystone-cron" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.671291 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="4014ba6f-1f86-42af-8f44-4ce633ea6288" containerName="telemetry-openstack-openstack-cell1" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.672005 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.675310 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.675533 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.675586 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.675614 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.675694 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.686282 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-28rxz"] Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.778584 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.778739 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7smx\" (UniqueName: \"kubernetes.io/projected/a0737da3-829f-4802-95aa-f42ae8546b75-kube-api-access-b7smx\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: 
\"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.778854 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.778894 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.778935 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.778972 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.880656 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7smx\" (UniqueName: \"kubernetes.io/projected/a0737da3-829f-4802-95aa-f42ae8546b75-kube-api-access-b7smx\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.880764 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.880796 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.880819 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-agent-neutron-config-0\") pod 
\"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.880846 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.881661 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.885166 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.885810 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.885958 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.886220 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.886522 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 16:03:19.904165 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7smx\" (UniqueName: \"kubernetes.io/projected/a0737da3-829f-4802-95aa-f42ae8546b75-kube-api-access-b7smx\") pod \"neutron-sriov-openstack-openstack-cell1-28rxz\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:19 crc kubenswrapper[4900]: I1202 
16:03:19.995750 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:03:20 crc kubenswrapper[4900]: W1202 16:03:20.578040 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0737da3_829f_4802_95aa_f42ae8546b75.slice/crio-14c1317a3a02611c612a0ea40e7a84170540e9e84bd320e15c92a46d9f784b10 WatchSource:0}: Error finding container 14c1317a3a02611c612a0ea40e7a84170540e9e84bd320e15c92a46d9f784b10: Status 404 returned error can't find the container with id 14c1317a3a02611c612a0ea40e7a84170540e9e84bd320e15c92a46d9f784b10 Dec 02 16:03:20 crc kubenswrapper[4900]: I1202 16:03:20.585880 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-28rxz"] Dec 02 16:03:21 crc kubenswrapper[4900]: I1202 16:03:21.586356 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" event={"ID":"a0737da3-829f-4802-95aa-f42ae8546b75","Type":"ContainerStarted","Data":"14c1317a3a02611c612a0ea40e7a84170540e9e84bd320e15c92a46d9f784b10"} Dec 02 16:03:22 crc kubenswrapper[4900]: I1202 16:03:22.599564 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" event={"ID":"a0737da3-829f-4802-95aa-f42ae8546b75","Type":"ContainerStarted","Data":"6ac16565858c732a20f4805a705c68b64723776393ff63c080e3284a79e68bdf"} Dec 02 16:03:22 crc kubenswrapper[4900]: I1202 16:03:22.623226 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" podStartSLOduration=2.853558338 podStartE2EDuration="3.623205417s" podCreationTimestamp="2025-12-02 16:03:19 +0000 UTC" firstStartedPulling="2025-12-02 16:03:20.581609885 +0000 UTC m=+8445.997423756" lastFinishedPulling="2025-12-02 16:03:21.351256984 +0000 UTC m=+8446.767070835" observedRunningTime="2025-12-02 16:03:22.614707907 +0000 UTC m=+8448.030521798" watchObservedRunningTime="2025-12-02 16:03:22.623205417 +0000 UTC m=+8448.039019268" Dec 02 16:03:45 crc kubenswrapper[4900]: I1202 16:03:45.117359 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:03:45 crc kubenswrapper[4900]: I1202 16:03:45.117906 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:04:15 crc kubenswrapper[4900]: I1202 16:04:15.116748 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:04:15 crc kubenswrapper[4900]: I1202 16:04:15.117271 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:04:15 crc kubenswrapper[4900]: I1202 16:04:15.117318 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 16:04:15 crc kubenswrapper[4900]: I1202 16:04:15.118085 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 16:04:15 crc kubenswrapper[4900]: I1202 16:04:15.118141 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" gracePeriod=600 Dec 02 16:04:15 crc kubenswrapper[4900]: E1202 16:04:15.264750 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:04:16 crc kubenswrapper[4900]: I1202 16:04:16.255778 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" exitCode=0 Dec 02 16:04:16 crc kubenswrapper[4900]: I1202 16:04:16.255815 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb"} Dec 02 16:04:16 crc kubenswrapper[4900]: I1202 16:04:16.256200 4900 scope.go:117] "RemoveContainer" containerID="e7e9b4667adc2d9a1e9f8bc11fceab02ccb34f30d29faab5adf246cc2018145e" Dec 02 16:04:16 crc kubenswrapper[4900]: I1202 16:04:16.256869 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:04:16 crc kubenswrapper[4900]: E1202 16:04:16.257185 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:04:28 crc kubenswrapper[4900]: I1202 16:04:28.910537 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:04:28 crc kubenswrapper[4900]: E1202 16:04:28.911817 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:04:43 crc kubenswrapper[4900]: I1202 16:04:43.910773 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:04:43 crc kubenswrapper[4900]: E1202 16:04:43.911984 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:04:57 crc kubenswrapper[4900]: I1202 16:04:57.910324 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:04:57 crc kubenswrapper[4900]: E1202 16:04:57.911354 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:05:07 crc kubenswrapper[4900]: I1202 16:05:07.873910 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hhvb4"] Dec 02 16:05:07 crc kubenswrapper[4900]: I1202 16:05:07.877550 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:07 crc kubenswrapper[4900]: I1202 16:05:07.894445 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhvb4"] Dec 02 16:05:07 crc kubenswrapper[4900]: I1202 16:05:07.944262 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-catalog-content\") pod \"redhat-marketplace-hhvb4\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:07 crc kubenswrapper[4900]: I1202 16:05:07.944393 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhzk9\" (UniqueName: \"kubernetes.io/projected/68f5a474-db7f-424b-8e14-ae51ea32e591-kube-api-access-hhzk9\") pod \"redhat-marketplace-hhvb4\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:07 crc kubenswrapper[4900]: I1202 16:05:07.944443 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-utilities\") pod \"redhat-marketplace-hhvb4\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:08 crc kubenswrapper[4900]: I1202 16:05:08.046660 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-utilities\") pod \"redhat-marketplace-hhvb4\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:08 crc kubenswrapper[4900]: I1202 16:05:08.046836 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-catalog-content\") pod \"redhat-marketplace-hhvb4\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:08 crc kubenswrapper[4900]: I1202 16:05:08.046950 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhzk9\" (UniqueName: \"kubernetes.io/projected/68f5a474-db7f-424b-8e14-ae51ea32e591-kube-api-access-hhzk9\") pod \"redhat-marketplace-hhvb4\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:08 crc kubenswrapper[4900]: I1202 16:05:08.047846 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-utilities\") pod \"redhat-marketplace-hhvb4\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:08 crc kubenswrapper[4900]: I1202 16:05:08.048466 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-catalog-content\") pod \"redhat-marketplace-hhvb4\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:08 crc kubenswrapper[4900]: I1202 16:05:08.086074 4900 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hhzk9\" (UniqueName: \"kubernetes.io/projected/68f5a474-db7f-424b-8e14-ae51ea32e591-kube-api-access-hhzk9\") pod \"redhat-marketplace-hhvb4\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:08 crc kubenswrapper[4900]: I1202 16:05:08.214073 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:08 crc kubenswrapper[4900]: I1202 16:05:08.777378 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhvb4"] Dec 02 16:05:08 crc kubenswrapper[4900]: I1202 16:05:08.930092 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhvb4" event={"ID":"68f5a474-db7f-424b-8e14-ae51ea32e591","Type":"ContainerStarted","Data":"fa67ed33c09aee2e8434ca421d113d819a27f7a894aa051941e2298858e31322"} Dec 02 16:05:09 crc kubenswrapper[4900]: I1202 16:05:09.939908 4900 generic.go:334] "Generic (PLEG): container finished" podID="68f5a474-db7f-424b-8e14-ae51ea32e591" containerID="468ebdac0daa66b8a8c22b5194426da5cf134651f0c986f68908e51fc295889f" exitCode=0 Dec 02 16:05:09 crc kubenswrapper[4900]: I1202 16:05:09.940071 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhvb4" event={"ID":"68f5a474-db7f-424b-8e14-ae51ea32e591","Type":"ContainerDied","Data":"468ebdac0daa66b8a8c22b5194426da5cf134651f0c986f68908e51fc295889f"} Dec 02 16:05:11 crc kubenswrapper[4900]: I1202 16:05:11.910557 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:05:11 crc kubenswrapper[4900]: E1202 16:05:11.911622 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:05:11 crc kubenswrapper[4900]: I1202 16:05:11.969219 4900 generic.go:334] "Generic (PLEG): container finished" podID="68f5a474-db7f-424b-8e14-ae51ea32e591" containerID="6dfb61c8e09e5f376d8989389c6382497f3200ab9159d0a843ad7de9a2a93cf9" exitCode=0 Dec 02 16:05:11 crc kubenswrapper[4900]: I1202 16:05:11.969267 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhvb4" event={"ID":"68f5a474-db7f-424b-8e14-ae51ea32e591","Type":"ContainerDied","Data":"6dfb61c8e09e5f376d8989389c6382497f3200ab9159d0a843ad7de9a2a93cf9"} Dec 02 16:05:12 crc kubenswrapper[4900]: I1202 16:05:12.986000 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhvb4" event={"ID":"68f5a474-db7f-424b-8e14-ae51ea32e591","Type":"ContainerStarted","Data":"ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1"} Dec 02 16:05:13 crc kubenswrapper[4900]: I1202 16:05:13.027867 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hhvb4" podStartSLOduration=3.489279277 podStartE2EDuration="6.027834601s" podCreationTimestamp="2025-12-02 16:05:07 +0000 UTC" firstStartedPulling="2025-12-02 16:05:09.944485759 +0000 UTC m=+8555.360299610" lastFinishedPulling="2025-12-02 
16:05:12.483041083 +0000 UTC m=+8557.898854934" observedRunningTime="2025-12-02 16:05:13.009402748 +0000 UTC m=+8558.425216619" watchObservedRunningTime="2025-12-02 16:05:13.027834601 +0000 UTC m=+8558.443648492" Dec 02 16:05:14 crc kubenswrapper[4900]: I1202 16:05:14.163750 4900 scope.go:117] "RemoveContainer" containerID="573f232d28e7a5a85fc31c63fa12b0400a5c2db1ac3fdb50450f7b7ad103ffb1" Dec 02 16:05:14 crc kubenswrapper[4900]: I1202 16:05:14.187654 4900 scope.go:117] "RemoveContainer" containerID="3b9ae331daba5093b1f501570323dee1daae93fa6a645a070d1802bc189b20c4" Dec 02 16:05:14 crc kubenswrapper[4900]: I1202 16:05:14.211530 4900 scope.go:117] "RemoveContainer" containerID="ab43552c59a0dfb6625ca51d49b714e92b6cef8fe33ebd28153d45efc4df137c" Dec 02 16:05:18 crc kubenswrapper[4900]: I1202 16:05:18.214630 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:18 crc kubenswrapper[4900]: I1202 16:05:18.215129 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:18 crc kubenswrapper[4900]: I1202 16:05:18.287527 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:19 crc kubenswrapper[4900]: I1202 16:05:19.159268 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:19 crc kubenswrapper[4900]: I1202 16:05:19.232228 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhvb4"] Dec 02 16:05:21 crc kubenswrapper[4900]: I1202 16:05:21.127311 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hhvb4" podUID="68f5a474-db7f-424b-8e14-ae51ea32e591" containerName="registry-server" containerID="cri-o://ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1" gracePeriod=2 Dec 02 16:05:21 crc kubenswrapper[4900]: I1202 16:05:21.684502 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:21 crc kubenswrapper[4900]: I1202 16:05:21.762703 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhzk9\" (UniqueName: \"kubernetes.io/projected/68f5a474-db7f-424b-8e14-ae51ea32e591-kube-api-access-hhzk9\") pod \"68f5a474-db7f-424b-8e14-ae51ea32e591\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " Dec 02 16:05:21 crc kubenswrapper[4900]: I1202 16:05:21.762765 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-utilities\") pod \"68f5a474-db7f-424b-8e14-ae51ea32e591\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " Dec 02 16:05:21 crc kubenswrapper[4900]: I1202 16:05:21.762876 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-catalog-content\") pod \"68f5a474-db7f-424b-8e14-ae51ea32e591\" (UID: \"68f5a474-db7f-424b-8e14-ae51ea32e591\") " Dec 02 16:05:21 crc kubenswrapper[4900]: I1202 16:05:21.764063 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-utilities" (OuterVolumeSpecName: "utilities") pod "68f5a474-db7f-424b-8e14-ae51ea32e591" (UID: "68f5a474-db7f-424b-8e14-ae51ea32e591"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:05:21 crc kubenswrapper[4900]: I1202 16:05:21.773969 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f5a474-db7f-424b-8e14-ae51ea32e591-kube-api-access-hhzk9" (OuterVolumeSpecName: "kube-api-access-hhzk9") pod "68f5a474-db7f-424b-8e14-ae51ea32e591" (UID: "68f5a474-db7f-424b-8e14-ae51ea32e591"). InnerVolumeSpecName "kube-api-access-hhzk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:05:21 crc kubenswrapper[4900]: I1202 16:05:21.776451 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 16:05:21 crc kubenswrapper[4900]: I1202 16:05:21.776511 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhzk9\" (UniqueName: \"kubernetes.io/projected/68f5a474-db7f-424b-8e14-ae51ea32e591-kube-api-access-hhzk9\") on node \"crc\" DevicePath \"\"" Dec 02 16:05:21 crc kubenswrapper[4900]: I1202 16:05:21.797714 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68f5a474-db7f-424b-8e14-ae51ea32e591" (UID: "68f5a474-db7f-424b-8e14-ae51ea32e591"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:05:21 crc kubenswrapper[4900]: I1202 16:05:21.879365 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68f5a474-db7f-424b-8e14-ae51ea32e591-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.140884 4900 generic.go:334] "Generic (PLEG): container finished" podID="68f5a474-db7f-424b-8e14-ae51ea32e591" containerID="ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1" exitCode=0 Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.140992 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhvb4" Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.140978 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhvb4" event={"ID":"68f5a474-db7f-424b-8e14-ae51ea32e591","Type":"ContainerDied","Data":"ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1"} Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.142197 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhvb4" event={"ID":"68f5a474-db7f-424b-8e14-ae51ea32e591","Type":"ContainerDied","Data":"fa67ed33c09aee2e8434ca421d113d819a27f7a894aa051941e2298858e31322"} Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.142257 4900 scope.go:117] "RemoveContainer" containerID="ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1" Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.178785 4900 scope.go:117] "RemoveContainer" containerID="6dfb61c8e09e5f376d8989389c6382497f3200ab9159d0a843ad7de9a2a93cf9" Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.180812 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhvb4"] Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.193778 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhvb4"] Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.209894 4900 scope.go:117] "RemoveContainer" containerID="468ebdac0daa66b8a8c22b5194426da5cf134651f0c986f68908e51fc295889f" Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.259385 4900 scope.go:117] "RemoveContainer" containerID="ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1" Dec 02 16:05:22 crc kubenswrapper[4900]: E1202 16:05:22.259955 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1\": container with ID starting with ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1 not found: ID does not exist" containerID="ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1" Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.259991 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1"} err="failed to get container status \"ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1\": rpc error: code = NotFound desc = could not find container \"ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1\": container with ID starting with ed5f62b8e902fddc4a22daa4ab220e9ef8670da82bfdf6b396d39d0cfd3271a1 not found: ID does not exist" Dec 02 16:05:22 
crc kubenswrapper[4900]: I1202 16:05:22.260020 4900 scope.go:117] "RemoveContainer" containerID="6dfb61c8e09e5f376d8989389c6382497f3200ab9159d0a843ad7de9a2a93cf9" Dec 02 16:05:22 crc kubenswrapper[4900]: E1202 16:05:22.260417 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dfb61c8e09e5f376d8989389c6382497f3200ab9159d0a843ad7de9a2a93cf9\": container with ID starting with 6dfb61c8e09e5f376d8989389c6382497f3200ab9159d0a843ad7de9a2a93cf9 not found: ID does not exist" containerID="6dfb61c8e09e5f376d8989389c6382497f3200ab9159d0a843ad7de9a2a93cf9" Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.260526 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dfb61c8e09e5f376d8989389c6382497f3200ab9159d0a843ad7de9a2a93cf9"} err="failed to get container status \"6dfb61c8e09e5f376d8989389c6382497f3200ab9159d0a843ad7de9a2a93cf9\": rpc error: code = NotFound desc = could not find container \"6dfb61c8e09e5f376d8989389c6382497f3200ab9159d0a843ad7de9a2a93cf9\": container with ID starting with 6dfb61c8e09e5f376d8989389c6382497f3200ab9159d0a843ad7de9a2a93cf9 not found: ID does not exist" Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.260617 4900 scope.go:117] "RemoveContainer" containerID="468ebdac0daa66b8a8c22b5194426da5cf134651f0c986f68908e51fc295889f" Dec 02 16:05:22 crc kubenswrapper[4900]: E1202 16:05:22.261033 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468ebdac0daa66b8a8c22b5194426da5cf134651f0c986f68908e51fc295889f\": container with ID starting with 468ebdac0daa66b8a8c22b5194426da5cf134651f0c986f68908e51fc295889f not found: ID does not exist" containerID="468ebdac0daa66b8a8c22b5194426da5cf134651f0c986f68908e51fc295889f" Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.261108 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468ebdac0daa66b8a8c22b5194426da5cf134651f0c986f68908e51fc295889f"} err="failed to get container status \"468ebdac0daa66b8a8c22b5194426da5cf134651f0c986f68908e51fc295889f\": rpc error: code = NotFound desc = could not find container \"468ebdac0daa66b8a8c22b5194426da5cf134651f0c986f68908e51fc295889f\": container with ID starting with 468ebdac0daa66b8a8c22b5194426da5cf134651f0c986f68908e51fc295889f not found: ID does not exist" Dec 02 16:05:22 crc kubenswrapper[4900]: I1202 16:05:22.924551 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f5a474-db7f-424b-8e14-ae51ea32e591" path="/var/lib/kubelet/pods/68f5a474-db7f-424b-8e14-ae51ea32e591/volumes" Dec 02 16:05:24 crc kubenswrapper[4900]: I1202 16:05:24.917212 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:05:24 crc kubenswrapper[4900]: E1202 16:05:24.918302 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:05:39 crc kubenswrapper[4900]: I1202 16:05:39.912146 4900 scope.go:117] "RemoveContainer" 
containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:05:39 crc kubenswrapper[4900]: E1202 16:05:39.915037 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:05:53 crc kubenswrapper[4900]: I1202 16:05:53.910873 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:05:53 crc kubenswrapper[4900]: E1202 16:05:53.911989 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:06:05 crc kubenswrapper[4900]: I1202 16:06:05.911075 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:06:05 crc kubenswrapper[4900]: E1202 16:06:05.912257 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:06:19 crc kubenswrapper[4900]: I1202 16:06:19.910478 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:06:19 crc kubenswrapper[4900]: E1202 16:06:19.911475 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:06:34 crc kubenswrapper[4900]: I1202 16:06:34.917317 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:06:34 crc kubenswrapper[4900]: E1202 16:06:34.918124 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:06:47 crc kubenswrapper[4900]: I1202 16:06:47.911452 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:06:47 crc kubenswrapper[4900]: E1202 16:06:47.912270 4900 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:07:02 crc kubenswrapper[4900]: I1202 16:07:02.912004 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:07:02 crc kubenswrapper[4900]: E1202 16:07:02.913253 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:07:14 crc kubenswrapper[4900]: I1202 16:07:14.925200 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:07:14 crc kubenswrapper[4900]: E1202 16:07:14.926013 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:07:28 crc kubenswrapper[4900]: I1202 16:07:28.911637 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:07:28 crc kubenswrapper[4900]: E1202 16:07:28.913031 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:07:43 crc kubenswrapper[4900]: I1202 16:07:43.910121 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:07:43 crc kubenswrapper[4900]: E1202 16:07:43.911273 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:07:58 crc kubenswrapper[4900]: I1202 16:07:58.910001 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:07:58 crc kubenswrapper[4900]: E1202 16:07:58.911067 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 02 16:08:11 crc kubenswrapper[4900]: I1202 16:08:11.910845 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb"
Dec 02 16:08:11 crc kubenswrapper[4900]: E1202 16:08:11.911948 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 16:08:11 crc kubenswrapper[4900]: I1202 16:08:11.925850 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nbx5f"]
Dec 02 16:08:11 crc kubenswrapper[4900]: E1202 16:08:11.926707 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f5a474-db7f-424b-8e14-ae51ea32e591" containerName="extract-utilities"
Dec 02 16:08:11 crc kubenswrapper[4900]: I1202 16:08:11.926740 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f5a474-db7f-424b-8e14-ae51ea32e591" containerName="extract-utilities"
Dec 02 16:08:11 crc kubenswrapper[4900]: E1202 16:08:11.926790 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f5a474-db7f-424b-8e14-ae51ea32e591" containerName="extract-content"
Dec 02 16:08:11 crc kubenswrapper[4900]: I1202 16:08:11.926806 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f5a474-db7f-424b-8e14-ae51ea32e591" containerName="extract-content"
Dec 02 16:08:11 crc kubenswrapper[4900]: E1202 16:08:11.926876 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f5a474-db7f-424b-8e14-ae51ea32e591" containerName="registry-server"
Dec 02 16:08:11 crc kubenswrapper[4900]: I1202 16:08:11.926890 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f5a474-db7f-424b-8e14-ae51ea32e591" containerName="registry-server"
Dec 02 16:08:11 crc kubenswrapper[4900]: I1202 16:08:11.927304 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f5a474-db7f-424b-8e14-ae51ea32e591" containerName="registry-server"
Dec 02 16:08:11 crc kubenswrapper[4900]: I1202 16:08:11.930286 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbx5f"
Need to start a new one" pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:11 crc kubenswrapper[4900]: I1202 16:08:11.939993 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbx5f"] Dec 02 16:08:11 crc kubenswrapper[4900]: I1202 16:08:11.945902 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-catalog-content\") pod \"community-operators-nbx5f\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:11 crc kubenswrapper[4900]: I1202 16:08:11.946293 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdqqj\" (UniqueName: \"kubernetes.io/projected/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-kube-api-access-jdqqj\") pod \"community-operators-nbx5f\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:11 crc kubenswrapper[4900]: I1202 16:08:11.946490 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-utilities\") pod \"community-operators-nbx5f\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:12 crc kubenswrapper[4900]: I1202 16:08:12.048593 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-catalog-content\") pod \"community-operators-nbx5f\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:12 crc kubenswrapper[4900]: I1202 16:08:12.048765 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdqqj\" (UniqueName: \"kubernetes.io/projected/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-kube-api-access-jdqqj\") pod \"community-operators-nbx5f\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:12 crc kubenswrapper[4900]: I1202 16:08:12.048879 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-utilities\") pod \"community-operators-nbx5f\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:12 crc kubenswrapper[4900]: I1202 16:08:12.049167 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-catalog-content\") pod \"community-operators-nbx5f\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:12 crc kubenswrapper[4900]: I1202 16:08:12.049328 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-utilities\") pod \"community-operators-nbx5f\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:12 crc kubenswrapper[4900]: I1202 16:08:12.068338 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jdqqj\" (UniqueName: \"kubernetes.io/projected/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-kube-api-access-jdqqj\") pod \"community-operators-nbx5f\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:12 crc kubenswrapper[4900]: I1202 16:08:12.292936 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:12 crc kubenswrapper[4900]: I1202 16:08:12.834932 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbx5f"] Dec 02 16:08:13 crc kubenswrapper[4900]: I1202 16:08:13.013994 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbx5f" event={"ID":"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c","Type":"ContainerStarted","Data":"8d0c2cebf26df724cdaca98d481e1ce02ad0584d6cc9f03dac100f679e2c97c1"} Dec 02 16:08:14 crc kubenswrapper[4900]: I1202 16:08:14.033754 4900 generic.go:334] "Generic (PLEG): container finished" podID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" containerID="12d8b956e37773cd3ad1677c055183adbc5335aa3f101a5b81ea0cc09fede3c6" exitCode=0 Dec 02 16:08:14 crc kubenswrapper[4900]: I1202 16:08:14.033996 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbx5f" event={"ID":"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c","Type":"ContainerDied","Data":"12d8b956e37773cd3ad1677c055183adbc5335aa3f101a5b81ea0cc09fede3c6"} Dec 02 16:08:14 crc kubenswrapper[4900]: I1202 16:08:14.036138 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 16:08:15 crc kubenswrapper[4900]: I1202 16:08:15.050079 4900 generic.go:334] "Generic (PLEG): container finished" podID="a0737da3-829f-4802-95aa-f42ae8546b75" containerID="6ac16565858c732a20f4805a705c68b64723776393ff63c080e3284a79e68bdf" exitCode=0 Dec 02 16:08:15 crc kubenswrapper[4900]: I1202 16:08:15.050341 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" event={"ID":"a0737da3-829f-4802-95aa-f42ae8546b75","Type":"ContainerDied","Data":"6ac16565858c732a20f4805a705c68b64723776393ff63c080e3284a79e68bdf"} Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.067823 4900 generic.go:334] "Generic (PLEG): container finished" podID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" containerID="ca015d0471f24ea0c66738f04346f76b57b99bd724d6e5f33986cb73f1eee24a" exitCode=0 Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.067973 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbx5f" event={"ID":"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c","Type":"ContainerDied","Data":"ca015d0471f24ea0c66738f04346f76b57b99bd724d6e5f33986cb73f1eee24a"} Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.628361 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.792277 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7smx\" (UniqueName: \"kubernetes.io/projected/a0737da3-829f-4802-95aa-f42ae8546b75-kube-api-access-b7smx\") pod \"a0737da3-829f-4802-95aa-f42ae8546b75\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.792323 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ceph\") pod \"a0737da3-829f-4802-95aa-f42ae8546b75\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.792382 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-agent-neutron-config-0\") pod \"a0737da3-829f-4802-95aa-f42ae8546b75\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.792458 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ssh-key\") pod \"a0737da3-829f-4802-95aa-f42ae8546b75\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.792502 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-inventory\") pod \"a0737da3-829f-4802-95aa-f42ae8546b75\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.792650 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-combined-ca-bundle\") pod \"a0737da3-829f-4802-95aa-f42ae8546b75\" (UID: \"a0737da3-829f-4802-95aa-f42ae8546b75\") " Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.798230 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ceph" (OuterVolumeSpecName: "ceph") pod "a0737da3-829f-4802-95aa-f42ae8546b75" (UID: "a0737da3-829f-4802-95aa-f42ae8546b75"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.798713 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0737da3-829f-4802-95aa-f42ae8546b75-kube-api-access-b7smx" (OuterVolumeSpecName: "kube-api-access-b7smx") pod "a0737da3-829f-4802-95aa-f42ae8546b75" (UID: "a0737da3-829f-4802-95aa-f42ae8546b75"). InnerVolumeSpecName "kube-api-access-b7smx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.800830 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "a0737da3-829f-4802-95aa-f42ae8546b75" (UID: "a0737da3-829f-4802-95aa-f42ae8546b75"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.832839 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a0737da3-829f-4802-95aa-f42ae8546b75" (UID: "a0737da3-829f-4802-95aa-f42ae8546b75"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.834401 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "a0737da3-829f-4802-95aa-f42ae8546b75" (UID: "a0737da3-829f-4802-95aa-f42ae8546b75"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.835917 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-inventory" (OuterVolumeSpecName: "inventory") pod "a0737da3-829f-4802-95aa-f42ae8546b75" (UID: "a0737da3-829f-4802-95aa-f42ae8546b75"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.894935 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7smx\" (UniqueName: \"kubernetes.io/projected/a0737da3-829f-4802-95aa-f42ae8546b75-kube-api-access-b7smx\") on node \"crc\" DevicePath \"\"" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.894968 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.894978 4900 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.894990 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.895000 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 16:08:16 crc kubenswrapper[4900]: I1202 16:08:16.895010 4900 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0737da3-829f-4802-95aa-f42ae8546b75-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.088677 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbx5f" event={"ID":"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c","Type":"ContainerStarted","Data":"b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b"} Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.092748 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.092768 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-28rxz" event={"ID":"a0737da3-829f-4802-95aa-f42ae8546b75","Type":"ContainerDied","Data":"14c1317a3a02611c612a0ea40e7a84170540e9e84bd320e15c92a46d9f784b10"} Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.092808 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14c1317a3a02611c612a0ea40e7a84170540e9e84bd320e15c92a46d9f784b10" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.116213 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nbx5f" podStartSLOduration=3.444356918 podStartE2EDuration="6.11619306s" podCreationTimestamp="2025-12-02 16:08:11 +0000 UTC" firstStartedPulling="2025-12-02 16:08:14.035894824 +0000 UTC m=+8739.451708675" lastFinishedPulling="2025-12-02 16:08:16.707730966 +0000 UTC m=+8742.123544817" observedRunningTime="2025-12-02 16:08:17.109464659 +0000 UTC m=+8742.525278510" watchObservedRunningTime="2025-12-02 16:08:17.11619306 +0000 UTC m=+8742.532006911" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.166358 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-vhckw"] Dec 02 16:08:17 crc kubenswrapper[4900]: E1202 16:08:17.166955 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0737da3-829f-4802-95aa-f42ae8546b75" containerName="neutron-sriov-openstack-openstack-cell1" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.166973 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0737da3-829f-4802-95aa-f42ae8546b75" containerName="neutron-sriov-openstack-openstack-cell1" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.167235 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0737da3-829f-4802-95aa-f42ae8546b75" containerName="neutron-sriov-openstack-openstack-cell1" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.168184 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.170900 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.171264 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.171384 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.171440 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.175455 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.201674 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-vhckw"] Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.305969 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7k5p\" (UniqueName: \"kubernetes.io/projected/22259223-6c4d-4df8-be51-5c59f0675b67-kube-api-access-r7k5p\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.306019 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.306710 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.306734 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.306804 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.306825 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.408921 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7k5p\" (UniqueName: \"kubernetes.io/projected/22259223-6c4d-4df8-be51-5c59f0675b67-kube-api-access-r7k5p\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.408968 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.409053 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.409077 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.409146 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.409168 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.413342 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.413535 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: 
\"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.413750 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.413766 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.415285 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.427167 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7k5p\" (UniqueName: \"kubernetes.io/projected/22259223-6c4d-4df8-be51-5c59f0675b67-kube-api-access-r7k5p\") pod \"neutron-dhcp-openstack-openstack-cell1-vhckw\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:17 crc kubenswrapper[4900]: I1202 16:08:17.491926 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:08:18 crc kubenswrapper[4900]: I1202 16:08:18.128054 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-vhckw"] Dec 02 16:08:18 crc kubenswrapper[4900]: W1202 16:08:18.136268 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22259223_6c4d_4df8_be51_5c59f0675b67.slice/crio-b171423422d5c1538cb39e47f9d2a93b7a379737b36d3a43fa7fe36bb1792084 WatchSource:0}: Error finding container b171423422d5c1538cb39e47f9d2a93b7a379737b36d3a43fa7fe36bb1792084: Status 404 returned error can't find the container with id b171423422d5c1538cb39e47f9d2a93b7a379737b36d3a43fa7fe36bb1792084 Dec 02 16:08:19 crc kubenswrapper[4900]: I1202 16:08:19.130099 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" event={"ID":"22259223-6c4d-4df8-be51-5c59f0675b67","Type":"ContainerStarted","Data":"0a7af80fb8b362f793d40eb7ed75eae121d18973c54af78f054b3668a0ccc796"} Dec 02 16:08:19 crc kubenswrapper[4900]: I1202 16:08:19.130764 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" event={"ID":"22259223-6c4d-4df8-be51-5c59f0675b67","Type":"ContainerStarted","Data":"b171423422d5c1538cb39e47f9d2a93b7a379737b36d3a43fa7fe36bb1792084"} Dec 02 16:08:19 crc kubenswrapper[4900]: I1202 16:08:19.156399 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" podStartSLOduration=1.553373235 podStartE2EDuration="2.156380812s" podCreationTimestamp="2025-12-02 16:08:17 +0000 UTC" firstStartedPulling="2025-12-02 16:08:18.146782073 +0000 UTC m=+8743.562595924" lastFinishedPulling="2025-12-02 16:08:18.74978965 +0000 UTC m=+8744.165603501" observedRunningTime="2025-12-02 16:08:19.149829096 +0000 UTC m=+8744.565642977" watchObservedRunningTime="2025-12-02 16:08:19.156380812 +0000 UTC m=+8744.572194663" Dec 02 16:08:22 crc kubenswrapper[4900]: I1202 16:08:22.293804 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:22 crc kubenswrapper[4900]: I1202 16:08:22.294395 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:22 crc kubenswrapper[4900]: I1202 16:08:22.372533 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:23 crc kubenswrapper[4900]: I1202 16:08:23.260789 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:23 crc kubenswrapper[4900]: I1202 16:08:23.320441 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbx5f"] Dec 02 16:08:25 crc kubenswrapper[4900]: I1202 16:08:25.207199 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nbx5f" podUID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" containerName="registry-server" containerID="cri-o://b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b" gracePeriod=2 Dec 02 16:08:25 crc kubenswrapper[4900]: E1202 16:08:25.441505 4900 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8af7e7ab_3c86_4ceb_8837_35d2a81cb34c.slice/crio-b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8af7e7ab_3c86_4ceb_8837_35d2a81cb34c.slice/crio-conmon-b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b.scope\": RecentStats: unable to find data in memory cache]" Dec 02 16:08:25 crc kubenswrapper[4900]: I1202 16:08:25.725208 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:25 crc kubenswrapper[4900]: I1202 16:08:25.837251 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-utilities\") pod \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " Dec 02 16:08:25 crc kubenswrapper[4900]: I1202 16:08:25.837299 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-catalog-content\") pod \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " Dec 02 16:08:25 crc kubenswrapper[4900]: I1202 16:08:25.837636 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdqqj\" (UniqueName: \"kubernetes.io/projected/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-kube-api-access-jdqqj\") pod \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\" (UID: \"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c\") " Dec 02 16:08:25 crc kubenswrapper[4900]: I1202 16:08:25.838949 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-utilities" (OuterVolumeSpecName: "utilities") pod "8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" (UID: "8af7e7ab-3c86-4ceb-8837-35d2a81cb34c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:08:25 crc kubenswrapper[4900]: I1202 16:08:25.846761 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-kube-api-access-jdqqj" (OuterVolumeSpecName: "kube-api-access-jdqqj") pod "8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" (UID: "8af7e7ab-3c86-4ceb-8837-35d2a81cb34c"). InnerVolumeSpecName "kube-api-access-jdqqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:08:25 crc kubenswrapper[4900]: I1202 16:08:25.905987 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" (UID: "8af7e7ab-3c86-4ceb-8837-35d2a81cb34c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:08:25 crc kubenswrapper[4900]: I1202 16:08:25.941128 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdqqj\" (UniqueName: \"kubernetes.io/projected/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-kube-api-access-jdqqj\") on node \"crc\" DevicePath \"\"" Dec 02 16:08:25 crc kubenswrapper[4900]: I1202 16:08:25.941184 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 16:08:25 crc kubenswrapper[4900]: I1202 16:08:25.941198 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.221025 4900 generic.go:334] "Generic (PLEG): container finished" podID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" containerID="b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b" exitCode=0 Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.221192 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbx5f" event={"ID":"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c","Type":"ContainerDied","Data":"b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b"} Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.221478 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbx5f" event={"ID":"8af7e7ab-3c86-4ceb-8837-35d2a81cb34c","Type":"ContainerDied","Data":"8d0c2cebf26df724cdaca98d481e1ce02ad0584d6cc9f03dac100f679e2c97c1"} Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.221510 4900 scope.go:117] "RemoveContainer" containerID="b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.221290 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nbx5f" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.264739 4900 scope.go:117] "RemoveContainer" containerID="ca015d0471f24ea0c66738f04346f76b57b99bd724d6e5f33986cb73f1eee24a" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.273423 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbx5f"] Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.292927 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nbx5f"] Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.306120 4900 scope.go:117] "RemoveContainer" containerID="12d8b956e37773cd3ad1677c055183adbc5335aa3f101a5b81ea0cc09fede3c6" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.350607 4900 scope.go:117] "RemoveContainer" containerID="b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b" Dec 02 16:08:26 crc kubenswrapper[4900]: E1202 16:08:26.351302 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b\": container with ID starting with b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b not found: ID does not exist" containerID="b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.351362 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b"} err="failed to get container status \"b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b\": rpc error: code = NotFound desc = could not find container \"b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b\": container with ID starting with b808a3f8214ae76d38ed03064ad6c32ef1b4e511d97444905acc9e944e184b2b not found: ID does not exist" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.351402 4900 scope.go:117] "RemoveContainer" containerID="ca015d0471f24ea0c66738f04346f76b57b99bd724d6e5f33986cb73f1eee24a" Dec 02 16:08:26 crc kubenswrapper[4900]: E1202 16:08:26.352021 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca015d0471f24ea0c66738f04346f76b57b99bd724d6e5f33986cb73f1eee24a\": container with ID starting with ca015d0471f24ea0c66738f04346f76b57b99bd724d6e5f33986cb73f1eee24a not found: ID does not exist" containerID="ca015d0471f24ea0c66738f04346f76b57b99bd724d6e5f33986cb73f1eee24a" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.352084 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca015d0471f24ea0c66738f04346f76b57b99bd724d6e5f33986cb73f1eee24a"} err="failed to get container status \"ca015d0471f24ea0c66738f04346f76b57b99bd724d6e5f33986cb73f1eee24a\": rpc error: code = NotFound desc = could not find container \"ca015d0471f24ea0c66738f04346f76b57b99bd724d6e5f33986cb73f1eee24a\": container with ID starting with ca015d0471f24ea0c66738f04346f76b57b99bd724d6e5f33986cb73f1eee24a not found: ID does not exist" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.352126 4900 scope.go:117] "RemoveContainer" containerID="12d8b956e37773cd3ad1677c055183adbc5335aa3f101a5b81ea0cc09fede3c6" Dec 02 16:08:26 crc kubenswrapper[4900]: E1202 16:08:26.352534 4900 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"12d8b956e37773cd3ad1677c055183adbc5335aa3f101a5b81ea0cc09fede3c6\": container with ID starting with 12d8b956e37773cd3ad1677c055183adbc5335aa3f101a5b81ea0cc09fede3c6 not found: ID does not exist" containerID="12d8b956e37773cd3ad1677c055183adbc5335aa3f101a5b81ea0cc09fede3c6" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.352576 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d8b956e37773cd3ad1677c055183adbc5335aa3f101a5b81ea0cc09fede3c6"} err="failed to get container status \"12d8b956e37773cd3ad1677c055183adbc5335aa3f101a5b81ea0cc09fede3c6\": rpc error: code = NotFound desc = could not find container \"12d8b956e37773cd3ad1677c055183adbc5335aa3f101a5b81ea0cc09fede3c6\": container with ID starting with 12d8b956e37773cd3ad1677c055183adbc5335aa3f101a5b81ea0cc09fede3c6 not found: ID does not exist" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.911052 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:08:26 crc kubenswrapper[4900]: E1202 16:08:26.911912 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:08:26 crc kubenswrapper[4900]: I1202 16:08:26.935586 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" path="/var/lib/kubelet/pods/8af7e7ab-3c86-4ceb-8837-35d2a81cb34c/volumes" Dec 02 16:08:41 crc kubenswrapper[4900]: I1202 16:08:41.911294 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:08:41 crc kubenswrapper[4900]: E1202 16:08:41.912327 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:08:54 crc kubenswrapper[4900]: I1202 16:08:54.922205 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:08:54 crc kubenswrapper[4900]: E1202 16:08:54.923207 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:09:06 crc kubenswrapper[4900]: I1202 16:09:06.910888 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:09:06 crc kubenswrapper[4900]: E1202 16:09:06.911755 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.556896 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jjcp5"] Dec 02 16:09:18 crc kubenswrapper[4900]: E1202 16:09:18.558521 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" containerName="extract-content" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.558549 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" containerName="extract-content" Dec 02 16:09:18 crc kubenswrapper[4900]: E1202 16:09:18.558619 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" containerName="extract-utilities" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.558633 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" containerName="extract-utilities" Dec 02 16:09:18 crc kubenswrapper[4900]: E1202 16:09:18.558690 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" containerName="registry-server" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.558704 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" containerName="registry-server" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.559097 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af7e7ab-3c86-4ceb-8837-35d2a81cb34c" containerName="registry-server" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.562272 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.571706 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjcp5"] Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.662375 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvs7w\" (UniqueName: \"kubernetes.io/projected/3cac3358-d341-408f-8ce0-f154d0c3fe24-kube-api-access-vvs7w\") pod \"certified-operators-jjcp5\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") " pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.662552 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-utilities\") pod \"certified-operators-jjcp5\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") " pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.662807 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-catalog-content\") pod \"certified-operators-jjcp5\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") " pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.765569 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvs7w\" (UniqueName: \"kubernetes.io/projected/3cac3358-d341-408f-8ce0-f154d0c3fe24-kube-api-access-vvs7w\") pod \"certified-operators-jjcp5\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") " pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.765628 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-utilities\") pod \"certified-operators-jjcp5\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") " pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.765709 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-catalog-content\") pod \"certified-operators-jjcp5\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") " pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.766175 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-catalog-content\") pod \"certified-operators-jjcp5\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") " pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:18 crc kubenswrapper[4900]: I1202 16:09:18.766394 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-utilities\") pod \"certified-operators-jjcp5\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") " pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:19 crc kubenswrapper[4900]: I1202 16:09:19.066512 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vvs7w\" (UniqueName: \"kubernetes.io/projected/3cac3358-d341-408f-8ce0-f154d0c3fe24-kube-api-access-vvs7w\") pod \"certified-operators-jjcp5\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") " pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:19 crc kubenswrapper[4900]: I1202 16:09:19.198308 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:19 crc kubenswrapper[4900]: I1202 16:09:19.687248 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjcp5"] Dec 02 16:09:19 crc kubenswrapper[4900]: I1202 16:09:19.859033 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjcp5" event={"ID":"3cac3358-d341-408f-8ce0-f154d0c3fe24","Type":"ContainerStarted","Data":"6e9c35ef24f1cd165842302f32bb1e58b7d78bdd3b9d879a84ec4885238a326e"} Dec 02 16:09:20 crc kubenswrapper[4900]: I1202 16:09:20.868540 4900 generic.go:334] "Generic (PLEG): container finished" podID="3cac3358-d341-408f-8ce0-f154d0c3fe24" containerID="b284d1e2952325b71b41f031f1bb9623ed5303f96558cabc060b78173ba5ac1a" exitCode=0 Dec 02 16:09:20 crc kubenswrapper[4900]: I1202 16:09:20.868607 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjcp5" event={"ID":"3cac3358-d341-408f-8ce0-f154d0c3fe24","Type":"ContainerDied","Data":"b284d1e2952325b71b41f031f1bb9623ed5303f96558cabc060b78173ba5ac1a"} Dec 02 16:09:20 crc kubenswrapper[4900]: I1202 16:09:20.910040 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:09:21 crc kubenswrapper[4900]: I1202 16:09:21.881375 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"5240b0b5c44f58e14a690f455d362fc37b274a6424c43bb66b5a66a5725e9f7f"} Dec 02 16:09:22 crc kubenswrapper[4900]: I1202 16:09:22.893489 4900 generic.go:334] "Generic (PLEG): container finished" podID="3cac3358-d341-408f-8ce0-f154d0c3fe24" containerID="23cd052f7a1f5b0d80ffc3a5ad1878d8e2cd944325256dc3c94cbb8fcca7c1ee" exitCode=0 Dec 02 16:09:22 crc kubenswrapper[4900]: I1202 16:09:22.893587 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjcp5" event={"ID":"3cac3358-d341-408f-8ce0-f154d0c3fe24","Type":"ContainerDied","Data":"23cd052f7a1f5b0d80ffc3a5ad1878d8e2cd944325256dc3c94cbb8fcca7c1ee"} Dec 02 16:09:24 crc kubenswrapper[4900]: I1202 16:09:24.931309 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjcp5" event={"ID":"3cac3358-d341-408f-8ce0-f154d0c3fe24","Type":"ContainerStarted","Data":"3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525"} Dec 02 16:09:24 crc kubenswrapper[4900]: I1202 16:09:24.957693 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jjcp5" podStartSLOduration=4.011607031 podStartE2EDuration="6.957380005s" podCreationTimestamp="2025-12-02 16:09:18 +0000 UTC" firstStartedPulling="2025-12-02 16:09:20.871249727 +0000 UTC m=+8806.287063578" lastFinishedPulling="2025-12-02 16:09:23.817022681 +0000 UTC m=+8809.232836552" observedRunningTime="2025-12-02 16:09:24.948269887 +0000 UTC m=+8810.364083748" 
Dec 02 16:09:24 crc kubenswrapper[4900]: I1202 16:09:24.957693 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jjcp5" podStartSLOduration=4.011607031 podStartE2EDuration="6.957380005s" podCreationTimestamp="2025-12-02 16:09:18 +0000 UTC" firstStartedPulling="2025-12-02 16:09:20.871249727 +0000 UTC m=+8806.287063578" lastFinishedPulling="2025-12-02 16:09:23.817022681 +0000 UTC m=+8809.232836552" observedRunningTime="2025-12-02 16:09:24.948269887 +0000 UTC m=+8810.364083748" watchObservedRunningTime="2025-12-02 16:09:24.957380005 +0000 UTC m=+8810.373193876"
Dec 02 16:09:29 crc kubenswrapper[4900]: I1202 16:09:29.199317 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jjcp5"
Dec 02 16:09:29 crc kubenswrapper[4900]: I1202 16:09:29.200019 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jjcp5"
Dec 02 16:09:29 crc kubenswrapper[4900]: I1202 16:09:29.289368 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jjcp5"
Dec 02 16:09:30 crc kubenswrapper[4900]: I1202 16:09:30.094233 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jjcp5"
Dec 02 16:09:30 crc kubenswrapper[4900]: I1202 16:09:30.174481 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjcp5"]
Dec 02 16:09:32 crc kubenswrapper[4900]: I1202 16:09:32.031230 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jjcp5" podUID="3cac3358-d341-408f-8ce0-f154d0c3fe24" containerName="registry-server" containerID="cri-o://3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525" gracePeriod=2
Dec 02 16:09:32 crc kubenswrapper[4900]: I1202 16:09:32.698910 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjcp5"
Dec 02 16:09:32 crc kubenswrapper[4900]: I1202 16:09:32.837193 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvs7w\" (UniqueName: \"kubernetes.io/projected/3cac3358-d341-408f-8ce0-f154d0c3fe24-kube-api-access-vvs7w\") pod \"3cac3358-d341-408f-8ce0-f154d0c3fe24\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") "
Dec 02 16:09:32 crc kubenswrapper[4900]: I1202 16:09:32.837415 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-utilities\") pod \"3cac3358-d341-408f-8ce0-f154d0c3fe24\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") "
Dec 02 16:09:32 crc kubenswrapper[4900]: I1202 16:09:32.837566 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-catalog-content\") pod \"3cac3358-d341-408f-8ce0-f154d0c3fe24\" (UID: \"3cac3358-d341-408f-8ce0-f154d0c3fe24\") "
Dec 02 16:09:32 crc kubenswrapper[4900]: I1202 16:09:32.838570 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-utilities" (OuterVolumeSpecName: "utilities") pod "3cac3358-d341-408f-8ce0-f154d0c3fe24" (UID: "3cac3358-d341-408f-8ce0-f154d0c3fe24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 16:09:32 crc kubenswrapper[4900]: I1202 16:09:32.849574 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cac3358-d341-408f-8ce0-f154d0c3fe24-kube-api-access-vvs7w" (OuterVolumeSpecName: "kube-api-access-vvs7w") pod "3cac3358-d341-408f-8ce0-f154d0c3fe24" (UID: "3cac3358-d341-408f-8ce0-f154d0c3fe24"). InnerVolumeSpecName "kube-api-access-vvs7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 16:09:32 crc kubenswrapper[4900]: I1202 16:09:32.899951 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cac3358-d341-408f-8ce0-f154d0c3fe24" (UID: "3cac3358-d341-408f-8ce0-f154d0c3fe24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 16:09:32 crc kubenswrapper[4900]: I1202 16:09:32.940632 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 16:09:32 crc kubenswrapper[4900]: I1202 16:09:32.940692 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cac3358-d341-408f-8ce0-f154d0c3fe24-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 16:09:32 crc kubenswrapper[4900]: I1202 16:09:32.940708 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvs7w\" (UniqueName: \"kubernetes.io/projected/3cac3358-d341-408f-8ce0-f154d0c3fe24-kube-api-access-vvs7w\") on node \"crc\" DevicePath \"\""
Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.052927 4900 generic.go:334] "Generic (PLEG): container finished" podID="3cac3358-d341-408f-8ce0-f154d0c3fe24" containerID="3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525" exitCode=0
Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.053284 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjcp5" event={"ID":"3cac3358-d341-408f-8ce0-f154d0c3fe24","Type":"ContainerDied","Data":"3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525"}
Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.053319 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjcp5" event={"ID":"3cac3358-d341-408f-8ce0-f154d0c3fe24","Type":"ContainerDied","Data":"6e9c35ef24f1cd165842302f32bb1e58b7d78bdd3b9d879a84ec4885238a326e"}
Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.053342 4900 scope.go:117] "RemoveContainer" containerID="3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525"
Need to start a new one" pod="openshift-marketplace/certified-operators-jjcp5" Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.094913 4900 scope.go:117] "RemoveContainer" containerID="23cd052f7a1f5b0d80ffc3a5ad1878d8e2cd944325256dc3c94cbb8fcca7c1ee" Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.108144 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjcp5"] Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.125321 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jjcp5"] Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.291693 4900 scope.go:117] "RemoveContainer" containerID="b284d1e2952325b71b41f031f1bb9623ed5303f96558cabc060b78173ba5ac1a" Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.333851 4900 scope.go:117] "RemoveContainer" containerID="3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525" Dec 02 16:09:33 crc kubenswrapper[4900]: E1202 16:09:33.338157 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525\": container with ID starting with 3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525 not found: ID does not exist" containerID="3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525" Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.338199 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525"} err="failed to get container status \"3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525\": rpc error: code = NotFound desc = could not find container \"3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525\": container with ID starting with 3d30641c979f1ae8974723658b9680998b810023fead65b365e90073a704e525 not found: ID does not exist" Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.338225 4900 scope.go:117] "RemoveContainer" containerID="23cd052f7a1f5b0d80ffc3a5ad1878d8e2cd944325256dc3c94cbb8fcca7c1ee" Dec 02 16:09:33 crc kubenswrapper[4900]: E1202 16:09:33.338527 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23cd052f7a1f5b0d80ffc3a5ad1878d8e2cd944325256dc3c94cbb8fcca7c1ee\": container with ID starting with 23cd052f7a1f5b0d80ffc3a5ad1878d8e2cd944325256dc3c94cbb8fcca7c1ee not found: ID does not exist" containerID="23cd052f7a1f5b0d80ffc3a5ad1878d8e2cd944325256dc3c94cbb8fcca7c1ee" Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.338585 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23cd052f7a1f5b0d80ffc3a5ad1878d8e2cd944325256dc3c94cbb8fcca7c1ee"} err="failed to get container status \"23cd052f7a1f5b0d80ffc3a5ad1878d8e2cd944325256dc3c94cbb8fcca7c1ee\": rpc error: code = NotFound desc = could not find container \"23cd052f7a1f5b0d80ffc3a5ad1878d8e2cd944325256dc3c94cbb8fcca7c1ee\": container with ID starting with 23cd052f7a1f5b0d80ffc3a5ad1878d8e2cd944325256dc3c94cbb8fcca7c1ee not found: ID does not exist" Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.338666 4900 scope.go:117] "RemoveContainer" containerID="b284d1e2952325b71b41f031f1bb9623ed5303f96558cabc060b78173ba5ac1a" Dec 02 16:09:33 crc kubenswrapper[4900]: E1202 16:09:33.342033 4900 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b284d1e2952325b71b41f031f1bb9623ed5303f96558cabc060b78173ba5ac1a\": container with ID starting with b284d1e2952325b71b41f031f1bb9623ed5303f96558cabc060b78173ba5ac1a not found: ID does not exist" containerID="b284d1e2952325b71b41f031f1bb9623ed5303f96558cabc060b78173ba5ac1a" Dec 02 16:09:33 crc kubenswrapper[4900]: I1202 16:09:33.342071 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b284d1e2952325b71b41f031f1bb9623ed5303f96558cabc060b78173ba5ac1a"} err="failed to get container status \"b284d1e2952325b71b41f031f1bb9623ed5303f96558cabc060b78173ba5ac1a\": rpc error: code = NotFound desc = could not find container \"b284d1e2952325b71b41f031f1bb9623ed5303f96558cabc060b78173ba5ac1a\": container with ID starting with b284d1e2952325b71b41f031f1bb9623ed5303f96558cabc060b78173ba5ac1a not found: ID does not exist" Dec 02 16:09:34 crc kubenswrapper[4900]: I1202 16:09:34.924418 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cac3358-d341-408f-8ce0-f154d0c3fe24" path="/var/lib/kubelet/pods/3cac3358-d341-408f-8ce0-f154d0c3fe24/volumes" Dec 02 16:11:36 crc kubenswrapper[4900]: I1202 16:11:36.704995 4900 generic.go:334] "Generic (PLEG): container finished" podID="22259223-6c4d-4df8-be51-5c59f0675b67" containerID="0a7af80fb8b362f793d40eb7ed75eae121d18973c54af78f054b3668a0ccc796" exitCode=0 Dec 02 16:11:36 crc kubenswrapper[4900]: I1202 16:11:36.705109 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" event={"ID":"22259223-6c4d-4df8-be51-5c59f0675b67","Type":"ContainerDied","Data":"0a7af80fb8b362f793d40eb7ed75eae121d18973c54af78f054b3668a0ccc796"} Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.174401 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.308981 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-combined-ca-bundle\") pod \"22259223-6c4d-4df8-be51-5c59f0675b67\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.309029 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-inventory\") pod \"22259223-6c4d-4df8-be51-5c59f0675b67\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.309061 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ceph\") pod \"22259223-6c4d-4df8-be51-5c59f0675b67\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.309236 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7k5p\" (UniqueName: \"kubernetes.io/projected/22259223-6c4d-4df8-be51-5c59f0675b67-kube-api-access-r7k5p\") pod \"22259223-6c4d-4df8-be51-5c59f0675b67\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.309284 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ssh-key\") pod \"22259223-6c4d-4df8-be51-5c59f0675b67\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.309319 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-agent-neutron-config-0\") pod \"22259223-6c4d-4df8-be51-5c59f0675b67\" (UID: \"22259223-6c4d-4df8-be51-5c59f0675b67\") " Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.316955 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "22259223-6c4d-4df8-be51-5c59f0675b67" (UID: "22259223-6c4d-4df8-be51-5c59f0675b67"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.317942 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22259223-6c4d-4df8-be51-5c59f0675b67-kube-api-access-r7k5p" (OuterVolumeSpecName: "kube-api-access-r7k5p") pod "22259223-6c4d-4df8-be51-5c59f0675b67" (UID: "22259223-6c4d-4df8-be51-5c59f0675b67"). InnerVolumeSpecName "kube-api-access-r7k5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.320821 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ceph" (OuterVolumeSpecName: "ceph") pod "22259223-6c4d-4df8-be51-5c59f0675b67" (UID: "22259223-6c4d-4df8-be51-5c59f0675b67"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.349554 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-inventory" (OuterVolumeSpecName: "inventory") pod "22259223-6c4d-4df8-be51-5c59f0675b67" (UID: "22259223-6c4d-4df8-be51-5c59f0675b67"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.352472 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "22259223-6c4d-4df8-be51-5c59f0675b67" (UID: "22259223-6c4d-4df8-be51-5c59f0675b67"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.362569 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "22259223-6c4d-4df8-be51-5c59f0675b67" (UID: "22259223-6c4d-4df8-be51-5c59f0675b67"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.411763 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7k5p\" (UniqueName: \"kubernetes.io/projected/22259223-6c4d-4df8-be51-5c59f0675b67-kube-api-access-r7k5p\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.411807 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.411821 4900 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.411835 4900 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.411865 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.411878 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/22259223-6c4d-4df8-be51-5c59f0675b67-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.739481 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" event={"ID":"22259223-6c4d-4df8-be51-5c59f0675b67","Type":"ContainerDied","Data":"b171423422d5c1538cb39e47f9d2a93b7a379737b36d3a43fa7fe36bb1792084"} Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.739538 4900 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b171423422d5c1538cb39e47f9d2a93b7a379737b36d3a43fa7fe36bb1792084" Dec 02 16:11:38 crc kubenswrapper[4900]: I1202 16:11:38.739575 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-vhckw" Dec 02 16:11:45 crc kubenswrapper[4900]: I1202 16:11:45.116923 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:11:45 crc kubenswrapper[4900]: I1202 16:11:45.117835 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:11:53 crc kubenswrapper[4900]: I1202 16:11:53.592108 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 16:11:53 crc kubenswrapper[4900]: I1202 16:11:53.592723 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="b504d673-a6a9-401b-bc03-47a24ac82901" containerName="nova-cell0-conductor-conductor" containerID="cri-o://5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968" gracePeriod=30 Dec 02 16:11:53 crc kubenswrapper[4900]: I1202 16:11:53.635452 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 16:11:53 crc kubenswrapper[4900]: I1202 16:11:53.635727 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="67fefbb5-0c22-4a87-bd43-80325328c3e2" containerName="nova-cell1-conductor-conductor" containerID="cri-o://56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b" gracePeriod=30 Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.350849 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.351954 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d7c11796-f4ef-4637-8541-5b27d488f6ab" containerName="nova-scheduler-scheduler" containerID="cri-o://5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8" gracePeriod=30 Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.403738 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.403963 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-log" containerID="cri-o://e82da1ed9487f2b43c135a4c9c5aacd7a7e6578d47a8c91ecaac3cada80d60e3" gracePeriod=30 Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.404102 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-metadata" containerID="cri-o://647b9ca80ba02788c97c620fa0f31986be3035d35e9afccc42a7975903cce145" gracePeriod=30 Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.460153 4900 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.460447 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerName="nova-api-log" containerID="cri-o://98716c9c32801896c448a0c761da06478f08646b7cb6d5f2eff8e4deabceb5b7" gracePeriod=30 Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.460933 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerName="nova-api-api" containerID="cri-o://23f26d6d7f0fab6b86022862139e84f21ca42cf481e2b4fc0c241b472792b2a4" gracePeriod=30 Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.499045 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns"] Dec 02 16:11:54 crc kubenswrapper[4900]: E1202 16:11:54.499457 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cac3358-d341-408f-8ce0-f154d0c3fe24" containerName="extract-content" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.499473 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cac3358-d341-408f-8ce0-f154d0c3fe24" containerName="extract-content" Dec 02 16:11:54 crc kubenswrapper[4900]: E1202 16:11:54.499497 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cac3358-d341-408f-8ce0-f154d0c3fe24" containerName="registry-server" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.499504 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cac3358-d341-408f-8ce0-f154d0c3fe24" containerName="registry-server" Dec 02 16:11:54 crc kubenswrapper[4900]: E1202 16:11:54.499521 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22259223-6c4d-4df8-be51-5c59f0675b67" containerName="neutron-dhcp-openstack-openstack-cell1" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.499527 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="22259223-6c4d-4df8-be51-5c59f0675b67" containerName="neutron-dhcp-openstack-openstack-cell1" Dec 02 16:11:54 crc kubenswrapper[4900]: E1202 16:11:54.499544 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cac3358-d341-408f-8ce0-f154d0c3fe24" containerName="extract-utilities" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.499550 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cac3358-d341-408f-8ce0-f154d0c3fe24" containerName="extract-utilities" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.499762 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="22259223-6c4d-4df8-be51-5c59f0675b67" containerName="neutron-dhcp-openstack-openstack-cell1" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.499774 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cac3358-d341-408f-8ce0-f154d0c3fe24" containerName="registry-server" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.500487 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.507068 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.507314 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.507499 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.507627 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.507831 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jzz4r" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.507946 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.508044 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.562247 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns"] Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.597876 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbn6n\" (UniqueName: \"kubernetes.io/projected/a40ced8a-8021-4c6e-8381-4e587bdb7f04-kube-api-access-sbn6n\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.597970 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.598016 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.598075 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 
16:11:54.598109 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.598167 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.598188 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.598208 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.598239 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.598288 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.598309 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.701858 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.702724 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.702756 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.703025 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.703070 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.703132 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.703152 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.703221 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbn6n\" (UniqueName: \"kubernetes.io/projected/a40ced8a-8021-4c6e-8381-4e587bdb7f04-kube-api-access-sbn6n\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc 
kubenswrapper[4900]: I1202 16:11:54.703250 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.703325 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.703364 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.705034 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.707162 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.707955 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.709737 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.710214 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: 
\"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.712213 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.718091 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.718181 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.718605 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.719074 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: E1202 16:11:54.729141 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.731329 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbn6n\" (UniqueName: \"kubernetes.io/projected/a40ced8a-8021-4c6e-8381-4e587bdb7f04-kube-api-access-sbn6n\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: E1202 16:11:54.736960 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 16:11:54 crc kubenswrapper[4900]: E1202 16:11:54.738196 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 16:11:54 crc kubenswrapper[4900]: E1202 16:11:54.738231 4900 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d7c11796-f4ef-4637-8541-5b27d488f6ab" containerName="nova-scheduler-scheduler" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.832124 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.905593 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.905778 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-combined-ca-bundle\") pod \"67fefbb5-0c22-4a87-bd43-80325328c3e2\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.905869 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-config-data\") pod \"67fefbb5-0c22-4a87-bd43-80325328c3e2\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.906459 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p42zb\" (UniqueName: \"kubernetes.io/projected/67fefbb5-0c22-4a87-bd43-80325328c3e2-kube-api-access-p42zb\") pod \"67fefbb5-0c22-4a87-bd43-80325328c3e2\" (UID: \"67fefbb5-0c22-4a87-bd43-80325328c3e2\") " Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.918506 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67fefbb5-0c22-4a87-bd43-80325328c3e2-kube-api-access-p42zb" (OuterVolumeSpecName: "kube-api-access-p42zb") pod "67fefbb5-0c22-4a87-bd43-80325328c3e2" (UID: "67fefbb5-0c22-4a87-bd43-80325328c3e2"). InnerVolumeSpecName "kube-api-access-p42zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.944913 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-config-data" (OuterVolumeSpecName: "config-data") pod "67fefbb5-0c22-4a87-bd43-80325328c3e2" (UID: "67fefbb5-0c22-4a87-bd43-80325328c3e2"). InnerVolumeSpecName "config-data". 
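The ExecSync failures above are the nova-scheduler readiness probe, an exec probe whose command is visible in the cmd field: /usr/bin/pgrep -r DRST nova-scheduler. It cannot even register an exec PID because the container is mid-shutdown from the DELETE at 16:11:54. A sketch of the same check (command copied from the entries above; note the kubelet execs it inside the container, so running it anywhere else only approximates the probe):

    import subprocess

    # Sketch of the readiness check behind the ExecSync errors above: pgrep
    # for a nova-scheduler process in run state D, R, S, or T. The command
    # is copied from the log; the kubelet runs it inside the container.
    cmd = ["/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print("ready" if result.returncode == 0 else "not ready")
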
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.946303 4900 generic.go:334] "Generic (PLEG): container finished" podID="67fefbb5-0c22-4a87-bd43-80325328c3e2" containerID="56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b" exitCode=0 Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.946377 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"67fefbb5-0c22-4a87-bd43-80325328c3e2","Type":"ContainerDied","Data":"56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b"} Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.946401 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"67fefbb5-0c22-4a87-bd43-80325328c3e2","Type":"ContainerDied","Data":"5a975c05af751fb221e510fe57e3f2dd4a220bed97ab40cb583a9778dc7bee7b"} Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.946420 4900 scope.go:117] "RemoveContainer" containerID="56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.946577 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.951124 4900 generic.go:334] "Generic (PLEG): container finished" podID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerID="98716c9c32801896c448a0c761da06478f08646b7cb6d5f2eff8e4deabceb5b7" exitCode=143 Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.951228 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b0f74c1-cc44-44ac-a262-eea482b36ca8","Type":"ContainerDied","Data":"98716c9c32801896c448a0c761da06478f08646b7cb6d5f2eff8e4deabceb5b7"} Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.963267 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67fefbb5-0c22-4a87-bd43-80325328c3e2" (UID: "67fefbb5-0c22-4a87-bd43-80325328c3e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.970867 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff591ec8-ae35-4a05-b7e1-99b63b7125d7","Type":"ContainerDied","Data":"e82da1ed9487f2b43c135a4c9c5aacd7a7e6578d47a8c91ecaac3cada80d60e3"} Dec 02 16:11:54 crc kubenswrapper[4900]: I1202 16:11:54.971660 4900 generic.go:334] "Generic (PLEG): container finished" podID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerID="e82da1ed9487f2b43c135a4c9c5aacd7a7e6578d47a8c91ecaac3cada80d60e3" exitCode=143 Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.011222 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.011258 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fefbb5-0c22-4a87-bd43-80325328c3e2-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.011267 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p42zb\" (UniqueName: \"kubernetes.io/projected/67fefbb5-0c22-4a87-bd43-80325328c3e2-kube-api-access-p42zb\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.284580 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.295730 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.306753 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 16:11:55 crc kubenswrapper[4900]: E1202 16:11:55.307265 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67fefbb5-0c22-4a87-bd43-80325328c3e2" containerName="nova-cell1-conductor-conductor" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.307291 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="67fefbb5-0c22-4a87-bd43-80325328c3e2" containerName="nova-cell1-conductor-conductor" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.307615 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="67fefbb5-0c22-4a87-bd43-80325328c3e2" containerName="nova-cell1-conductor-conductor" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.308635 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.313017 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.330714 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.424069 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54127e6f-a1f3-4821-a5c3-2400546b2463-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"54127e6f-a1f3-4821-a5c3-2400546b2463\") " pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.424190 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf4p2\" (UniqueName: \"kubernetes.io/projected/54127e6f-a1f3-4821-a5c3-2400546b2463-kube-api-access-lf4p2\") pod \"nova-cell1-conductor-0\" (UID: \"54127e6f-a1f3-4821-a5c3-2400546b2463\") " pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.424229 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54127e6f-a1f3-4821-a5c3-2400546b2463-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"54127e6f-a1f3-4821-a5c3-2400546b2463\") " pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.488522 4900 scope.go:117] "RemoveContainer" containerID="56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b" Dec 02 16:11:55 crc kubenswrapper[4900]: E1202 16:11:55.490499 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b\": container with ID starting with 56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b not found: ID does not exist" containerID="56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.490541 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b"} err="failed to get container status \"56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b\": rpc error: code = NotFound desc = could not find container \"56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b\": container with ID starting with 56e3189d809daf4a00d96aefacdd457c2067d71aabe799ceb508dbc37d13410b not found: ID does not exist" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.526515 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf4p2\" (UniqueName: \"kubernetes.io/projected/54127e6f-a1f3-4821-a5c3-2400546b2463-kube-api-access-lf4p2\") pod \"nova-cell1-conductor-0\" (UID: \"54127e6f-a1f3-4821-a5c3-2400546b2463\") " pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.526567 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54127e6f-a1f3-4821-a5c3-2400546b2463-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"54127e6f-a1f3-4821-a5c3-2400546b2463\") " pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.526713 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54127e6f-a1f3-4821-a5c3-2400546b2463-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"54127e6f-a1f3-4821-a5c3-2400546b2463\") " pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.534655 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54127e6f-a1f3-4821-a5c3-2400546b2463-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"54127e6f-a1f3-4821-a5c3-2400546b2463\") " pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.536629 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54127e6f-a1f3-4821-a5c3-2400546b2463-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"54127e6f-a1f3-4821-a5c3-2400546b2463\") " pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.550102 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf4p2\" (UniqueName: \"kubernetes.io/projected/54127e6f-a1f3-4821-a5c3-2400546b2463-kube-api-access-lf4p2\") pod \"nova-cell1-conductor-0\" (UID: \"54127e6f-a1f3-4821-a5c3-2400546b2463\") " pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.574933 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns"] Dec 02 16:11:55 crc kubenswrapper[4900]: W1202 16:11:55.590541 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda40ced8a_8021_4c6e_8381_4e587bdb7f04.slice/crio-1e9853f19dc919be2bd278859bf4c03d621b42a1216f51663ef0fd8f10717303 WatchSource:0}: Error finding container 1e9853f19dc919be2bd278859bf4c03d621b42a1216f51663ef0fd8f10717303: Status 404 returned error can't find the container with id 1e9853f19dc919be2bd278859bf4c03d621b42a1216f51663ef0fd8f10717303 Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.758207 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:55 crc kubenswrapper[4900]: E1202 16:11:55.765773 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968 is running failed: container process not found" containerID="5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 02 16:11:55 crc kubenswrapper[4900]: E1202 16:11:55.771752 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968 is running failed: container process not found" containerID="5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 02 16:11:55 crc kubenswrapper[4900]: E1202 16:11:55.776728 4900 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968 is running failed: container process not found" containerID="5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 02 16:11:55 crc kubenswrapper[4900]: E1202 16:11:55.776783 4900 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="b504d673-a6a9-401b-bc03-47a24ac82901" containerName="nova-cell0-conductor-conductor" Dec 02 16:11:55 crc kubenswrapper[4900]: I1202 16:11:55.916215 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.017893 4900 generic.go:334] "Generic (PLEG): container finished" podID="b504d673-a6a9-401b-bc03-47a24ac82901" containerID="5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968" exitCode=0 Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.018157 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.018058 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b504d673-a6a9-401b-bc03-47a24ac82901","Type":"ContainerDied","Data":"5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968"} Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.018276 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b504d673-a6a9-401b-bc03-47a24ac82901","Type":"ContainerDied","Data":"bc1cc3a72f0d4b5503b92137f328bebdb55f597a1212fe47b99b39e452c5a8fb"} Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.018299 4900 scope.go:117] "RemoveContainer" containerID="5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.027997 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" event={"ID":"a40ced8a-8021-4c6e-8381-4e587bdb7f04","Type":"ContainerStarted","Data":"1e9853f19dc919be2bd278859bf4c03d621b42a1216f51663ef0fd8f10717303"} Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.041980 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-combined-ca-bundle\") pod \"b504d673-a6a9-401b-bc03-47a24ac82901\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.042029 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j95rc\" (UniqueName: \"kubernetes.io/projected/b504d673-a6a9-401b-bc03-47a24ac82901-kube-api-access-j95rc\") pod \"b504d673-a6a9-401b-bc03-47a24ac82901\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.042220 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-config-data\") pod \"b504d673-a6a9-401b-bc03-47a24ac82901\" (UID: \"b504d673-a6a9-401b-bc03-47a24ac82901\") " Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.063535 4900 scope.go:117] "RemoveContainer" containerID="5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.063749 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b504d673-a6a9-401b-bc03-47a24ac82901-kube-api-access-j95rc" (OuterVolumeSpecName: "kube-api-access-j95rc") pod "b504d673-a6a9-401b-bc03-47a24ac82901" (UID: "b504d673-a6a9-401b-bc03-47a24ac82901"). InnerVolumeSpecName "kube-api-access-j95rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:11:56 crc kubenswrapper[4900]: E1202 16:11:56.064794 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968\": container with ID starting with 5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968 not found: ID does not exist" containerID="5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.064830 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968"} err="failed to get container status \"5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968\": rpc error: code = NotFound desc = could not find container \"5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968\": container with ID starting with 5017fdc8b5890ac3b40e770b08f1ac693f7ceac807d7b5f401523e5fef10d968 not found: ID does not exist" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.076758 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b504d673-a6a9-401b-bc03-47a24ac82901" (UID: "b504d673-a6a9-401b-bc03-47a24ac82901"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.092851 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-config-data" (OuterVolumeSpecName: "config-data") pod "b504d673-a6a9-401b-bc03-47a24ac82901" (UID: "b504d673-a6a9-401b-bc03-47a24ac82901"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.144466 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.144496 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b504d673-a6a9-401b-bc03-47a24ac82901-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.144510 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j95rc\" (UniqueName: \"kubernetes.io/projected/b504d673-a6a9-401b-bc03-47a24ac82901-kube-api-access-j95rc\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.412121 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.572938 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.584116 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.594064 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 16:11:56 crc kubenswrapper[4900]: E1202 16:11:56.594698 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b504d673-a6a9-401b-bc03-47a24ac82901" containerName="nova-cell0-conductor-conductor" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.594722 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b504d673-a6a9-401b-bc03-47a24ac82901" containerName="nova-cell0-conductor-conductor" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.595015 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b504d673-a6a9-401b-bc03-47a24ac82901" containerName="nova-cell0-conductor-conductor" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.596118 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.598686 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.603399 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.655897 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqthh\" (UniqueName: \"kubernetes.io/projected/f76fdf10-cab6-4b9d-8484-82ed6b113f4f-kube-api-access-cqthh\") pod \"nova-cell0-conductor-0\" (UID: \"f76fdf10-cab6-4b9d-8484-82ed6b113f4f\") " pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.655979 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76fdf10-cab6-4b9d-8484-82ed6b113f4f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f76fdf10-cab6-4b9d-8484-82ed6b113f4f\") " pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.656141 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76fdf10-cab6-4b9d-8484-82ed6b113f4f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f76fdf10-cab6-4b9d-8484-82ed6b113f4f\") " pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.758517 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76fdf10-cab6-4b9d-8484-82ed6b113f4f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f76fdf10-cab6-4b9d-8484-82ed6b113f4f\") " pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.758943 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76fdf10-cab6-4b9d-8484-82ed6b113f4f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f76fdf10-cab6-4b9d-8484-82ed6b113f4f\") " pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.759138 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqthh\" (UniqueName: \"kubernetes.io/projected/f76fdf10-cab6-4b9d-8484-82ed6b113f4f-kube-api-access-cqthh\") pod \"nova-cell0-conductor-0\" (UID: \"f76fdf10-cab6-4b9d-8484-82ed6b113f4f\") " pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.926875 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67fefbb5-0c22-4a87-bd43-80325328c3e2" path="/var/lib/kubelet/pods/67fefbb5-0c22-4a87-bd43-80325328c3e2/volumes" Dec 02 16:11:56 crc kubenswrapper[4900]: I1202 16:11:56.927982 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b504d673-a6a9-401b-bc03-47a24ac82901" path="/var/lib/kubelet/pods/b504d673-a6a9-401b-bc03-47a24ac82901/volumes" Dec 02 16:11:57 crc kubenswrapper[4900]: I1202 16:11:57.044339 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"54127e6f-a1f3-4821-a5c3-2400546b2463","Type":"ContainerStarted","Data":"1ca96d762b52e3a1eedcc65ea6b836b292943cb63feb8c53447b275939d0eafe"} Dec 02 
16:11:57 crc kubenswrapper[4900]: I1202 16:11:57.257632 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqthh\" (UniqueName: \"kubernetes.io/projected/f76fdf10-cab6-4b9d-8484-82ed6b113f4f-kube-api-access-cqthh\") pod \"nova-cell0-conductor-0\" (UID: \"f76fdf10-cab6-4b9d-8484-82ed6b113f4f\") " pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:57 crc kubenswrapper[4900]: I1202 16:11:57.258274 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76fdf10-cab6-4b9d-8484-82ed6b113f4f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f76fdf10-cab6-4b9d-8484-82ed6b113f4f\") " pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:57 crc kubenswrapper[4900]: I1202 16:11:57.265673 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76fdf10-cab6-4b9d-8484-82ed6b113f4f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f76fdf10-cab6-4b9d-8484-82ed6b113f4f\") " pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:57 crc kubenswrapper[4900]: I1202 16:11:57.543042 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:57 crc kubenswrapper[4900]: I1202 16:11:57.836068 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.82:8775/\": read tcp 10.217.0.2:51670->10.217.1.82:8775: read: connection reset by peer" Dec 02 16:11:57 crc kubenswrapper[4900]: I1202 16:11:57.836118 4900 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.82:8775/\": read tcp 10.217.0.2:51660->10.217.1.82:8775: read: connection reset by peer" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.030723 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 16:11:58 crc kubenswrapper[4900]: W1202 16:11:58.042971 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf76fdf10_cab6_4b9d_8484_82ed6b113f4f.slice/crio-43fc4839a0285ec50fe5ca32be3f8fc2af60dd49baa20b322c05fdb691e52f4b WatchSource:0}: Error finding container 43fc4839a0285ec50fe5ca32be3f8fc2af60dd49baa20b322c05fdb691e52f4b: Status 404 returned error can't find the container with id 43fc4839a0285ec50fe5ca32be3f8fc2af60dd49baa20b322c05fdb691e52f4b Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.070511 4900 generic.go:334] "Generic (PLEG): container finished" podID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerID="647b9ca80ba02788c97c620fa0f31986be3035d35e9afccc42a7975903cce145" exitCode=0 Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.070781 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff591ec8-ae35-4a05-b7e1-99b63b7125d7","Type":"ContainerDied","Data":"647b9ca80ba02788c97c620fa0f31986be3035d35e9afccc42a7975903cce145"} Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.075263 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"54127e6f-a1f3-4821-a5c3-2400546b2463","Type":"ContainerStarted","Data":"f3c243e8a1f648d8e7b3f6a83011f7ea637c9d11b9392c552db472ccd361e26c"} Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.075360 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.078183 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" event={"ID":"a40ced8a-8021-4c6e-8381-4e587bdb7f04","Type":"ContainerStarted","Data":"b22d5e9e681ac0a24a20f63825a6db4da9e3a414c2cf5fab0c2c1b96b2d3b0d3"} Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.079461 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f76fdf10-cab6-4b9d-8484-82ed6b113f4f","Type":"ContainerStarted","Data":"43fc4839a0285ec50fe5ca32be3f8fc2af60dd49baa20b322c05fdb691e52f4b"} Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.106261 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.106241967 podStartE2EDuration="3.106241967s" podCreationTimestamp="2025-12-02 16:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:11:58.096666666 +0000 UTC m=+8963.512480517" watchObservedRunningTime="2025-12-02 16:11:58.106241967 +0000 UTC m=+8963.522055808" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.115160 4900 generic.go:334] "Generic (PLEG): container finished" podID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerID="23f26d6d7f0fab6b86022862139e84f21ca42cf481e2b4fc0c241b472792b2a4" exitCode=0 Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.115216 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b0f74c1-cc44-44ac-a262-eea482b36ca8","Type":"ContainerDied","Data":"23f26d6d7f0fab6b86022862139e84f21ca42cf481e2b4fc0c241b472792b2a4"} Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.132440 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" podStartSLOduration=3.514023017 podStartE2EDuration="4.132417479s" podCreationTimestamp="2025-12-02 16:11:54 +0000 UTC" firstStartedPulling="2025-12-02 16:11:55.601486281 +0000 UTC m=+8961.017300132" lastFinishedPulling="2025-12-02 16:11:56.219880733 +0000 UTC m=+8961.635694594" observedRunningTime="2025-12-02 16:11:58.111725722 +0000 UTC m=+8963.527539583" watchObservedRunningTime="2025-12-02 16:11:58.132417479 +0000 UTC m=+8963.548231330" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.268742 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.407554 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b0f74c1-cc44-44ac-a262-eea482b36ca8-logs\") pod \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.408025 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-config-data\") pod \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.408156 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b0f74c1-cc44-44ac-a262-eea482b36ca8-logs" (OuterVolumeSpecName: "logs") pod "1b0f74c1-cc44-44ac-a262-eea482b36ca8" (UID: "1b0f74c1-cc44-44ac-a262-eea482b36ca8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.408195 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-combined-ca-bundle\") pod \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.408255 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqvjz\" (UniqueName: \"kubernetes.io/projected/1b0f74c1-cc44-44ac-a262-eea482b36ca8-kube-api-access-dqvjz\") pod \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\" (UID: \"1b0f74c1-cc44-44ac-a262-eea482b36ca8\") " Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.410824 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b0f74c1-cc44-44ac-a262-eea482b36ca8-logs\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.421290 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b0f74c1-cc44-44ac-a262-eea482b36ca8-kube-api-access-dqvjz" (OuterVolumeSpecName: "kube-api-access-dqvjz") pod "1b0f74c1-cc44-44ac-a262-eea482b36ca8" (UID: "1b0f74c1-cc44-44ac-a262-eea482b36ca8"). InnerVolumeSpecName "kube-api-access-dqvjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.448976 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b0f74c1-cc44-44ac-a262-eea482b36ca8" (UID: "1b0f74c1-cc44-44ac-a262-eea482b36ca8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.485203 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-config-data" (OuterVolumeSpecName: "config-data") pod "1b0f74c1-cc44-44ac-a262-eea482b36ca8" (UID: "1b0f74c1-cc44-44ac-a262-eea482b36ca8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.525835 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqvjz\" (UniqueName: \"kubernetes.io/projected/1b0f74c1-cc44-44ac-a262-eea482b36ca8-kube-api-access-dqvjz\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.525936 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.525949 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0f74c1-cc44-44ac-a262-eea482b36ca8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.577297 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.628967 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-logs\") pod \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.629078 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-combined-ca-bundle\") pod \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.629129 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rs84\" (UniqueName: \"kubernetes.io/projected/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-kube-api-access-8rs84\") pod \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.629150 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-config-data\") pod \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\" (UID: \"ff591ec8-ae35-4a05-b7e1-99b63b7125d7\") " Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.632606 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-logs" (OuterVolumeSpecName: "logs") pod "ff591ec8-ae35-4a05-b7e1-99b63b7125d7" (UID: "ff591ec8-ae35-4a05-b7e1-99b63b7125d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.635450 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-kube-api-access-8rs84" (OuterVolumeSpecName: "kube-api-access-8rs84") pod "ff591ec8-ae35-4a05-b7e1-99b63b7125d7" (UID: "ff591ec8-ae35-4a05-b7e1-99b63b7125d7"). InnerVolumeSpecName "kube-api-access-8rs84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.669113 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff591ec8-ae35-4a05-b7e1-99b63b7125d7" (UID: "ff591ec8-ae35-4a05-b7e1-99b63b7125d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.711718 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-config-data" (OuterVolumeSpecName: "config-data") pod "ff591ec8-ae35-4a05-b7e1-99b63b7125d7" (UID: "ff591ec8-ae35-4a05-b7e1-99b63b7125d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.731758 4900 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-logs\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.731801 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.731815 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rs84\" (UniqueName: \"kubernetes.io/projected/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-kube-api-access-8rs84\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:58 crc kubenswrapper[4900]: I1202 16:11:58.731827 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff591ec8-ae35-4a05-b7e1-99b63b7125d7-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.005581 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.135598 4900 generic.go:334] "Generic (PLEG): container finished" podID="d7c11796-f4ef-4637-8541-5b27d488f6ab" containerID="5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8" exitCode=0 Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.135680 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d7c11796-f4ef-4637-8541-5b27d488f6ab","Type":"ContainerDied","Data":"5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8"} Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.135700 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.135718 4900 scope.go:117] "RemoveContainer" containerID="5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.135707 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d7c11796-f4ef-4637-8541-5b27d488f6ab","Type":"ContainerDied","Data":"8bc0c99c0d2dd77602fcbc46d2f33a50f97d5c84f739fea5c8996cd919fa601d"} Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.142040 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-config-data\") pod \"d7c11796-f4ef-4637-8541-5b27d488f6ab\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.142106 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-combined-ca-bundle\") pod \"d7c11796-f4ef-4637-8541-5b27d488f6ab\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.142143 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzkhw\" (UniqueName: \"kubernetes.io/projected/d7c11796-f4ef-4637-8541-5b27d488f6ab-kube-api-access-vzkhw\") pod \"d7c11796-f4ef-4637-8541-5b27d488f6ab\" (UID: \"d7c11796-f4ef-4637-8541-5b27d488f6ab\") " Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.148255 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f76fdf10-cab6-4b9d-8484-82ed6b113f4f","Type":"ContainerStarted","Data":"133ca187bf6701361b89395b9965778dbd075d4dbbdc2a6efc2cfcd4db23a1f7"} Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.150011 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.151914 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c11796-f4ef-4637-8541-5b27d488f6ab-kube-api-access-vzkhw" (OuterVolumeSpecName: "kube-api-access-vzkhw") pod "d7c11796-f4ef-4637-8541-5b27d488f6ab" (UID: "d7c11796-f4ef-4637-8541-5b27d488f6ab"). InnerVolumeSpecName "kube-api-access-vzkhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.183654 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b0f74c1-cc44-44ac-a262-eea482b36ca8","Type":"ContainerDied","Data":"1e1154e75af84e18117c51bbd478fc98fc6d2086b764f97ffee4718aa9b58456"} Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.184104 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.188449 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.188439563 podStartE2EDuration="3.188439563s" podCreationTimestamp="2025-12-02 16:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:11:59.170061343 +0000 UTC m=+8964.585875184" watchObservedRunningTime="2025-12-02 16:11:59.188439563 +0000 UTC m=+8964.604253404" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.203593 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-config-data" (OuterVolumeSpecName: "config-data") pod "d7c11796-f4ef-4637-8541-5b27d488f6ab" (UID: "d7c11796-f4ef-4637-8541-5b27d488f6ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.206814 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7c11796-f4ef-4637-8541-5b27d488f6ab" (UID: "d7c11796-f4ef-4637-8541-5b27d488f6ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.209825 4900 scope.go:117] "RemoveContainer" containerID="5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8" Dec 02 16:11:59 crc kubenswrapper[4900]: E1202 16:11:59.213742 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8\": container with ID starting with 5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8 not found: ID does not exist" containerID="5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.213951 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8"} err="failed to get container status \"5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8\": rpc error: code = NotFound desc = could not find container \"5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8\": container with ID starting with 5b5a0d3f4d78a73e800f21daf5109e8a0778729820b5a2a3c3d9a9d6670859b8 not found: ID does not exist" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.214032 4900 scope.go:117] "RemoveContainer" containerID="23f26d6d7f0fab6b86022862139e84f21ca42cf481e2b4fc0c241b472792b2a4" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.213896 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff591ec8-ae35-4a05-b7e1-99b63b7125d7","Type":"ContainerDied","Data":"7a3e5e11c81ccc5e48911186af2b2bf1e334db2f5daef7d1d2cb085167a11188"} Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.214798 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.237693 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.247052 4900 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.247082 4900 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c11796-f4ef-4637-8541-5b27d488f6ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.247094 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzkhw\" (UniqueName: \"kubernetes.io/projected/d7c11796-f4ef-4637-8541-5b27d488f6ab-kube-api-access-vzkhw\") on node \"crc\" DevicePath \"\"" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.257052 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.285699 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: E1202 16:11:59.286191 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c11796-f4ef-4637-8541-5b27d488f6ab" containerName="nova-scheduler-scheduler" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.286203 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c11796-f4ef-4637-8541-5b27d488f6ab" containerName="nova-scheduler-scheduler" Dec 02 16:11:59 crc kubenswrapper[4900]: E1202 16:11:59.286227 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerName="nova-api-log" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.286233 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerName="nova-api-log" Dec 02 16:11:59 crc kubenswrapper[4900]: E1202 16:11:59.286254 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-metadata" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.286260 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-metadata" Dec 02 16:11:59 crc kubenswrapper[4900]: E1202 16:11:59.286276 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-log" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.286282 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-log" Dec 02 16:11:59 crc kubenswrapper[4900]: E1202 16:11:59.286297 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerName="nova-api-api" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.286303 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerName="nova-api-api" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.286496 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerName="nova-api-api" Dec 02 16:11:59 crc 
kubenswrapper[4900]: I1202 16:11:59.286507 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c11796-f4ef-4637-8541-5b27d488f6ab" containerName="nova-scheduler-scheduler" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.286520 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" containerName="nova-api-log" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.286533 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-log" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.286549 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" containerName="nova-metadata-metadata" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.287623 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.296857 4900 scope.go:117] "RemoveContainer" containerID="98716c9c32801896c448a0c761da06478f08646b7cb6d5f2eff8e4deabceb5b7" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.306089 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.307322 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.325592 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.358708 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.358902 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/823a463c-6984-4621-ba0f-b9ad8b7f618c-logs\") pod \"nova-api-0\" (UID: \"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.358965 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/823a463c-6984-4621-ba0f-b9ad8b7f618c-config-data\") pod \"nova-api-0\" (UID: \"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.359111 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xpzv\" (UniqueName: \"kubernetes.io/projected/823a463c-6984-4621-ba0f-b9ad8b7f618c-kube-api-access-4xpzv\") pod \"nova-api-0\" (UID: \"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.359148 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/823a463c-6984-4621-ba0f-b9ad8b7f618c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.385816 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.386801 4900 scope.go:117] "RemoveContainer" 
containerID="647b9ca80ba02788c97c620fa0f31986be3035d35e9afccc42a7975903cce145" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.388365 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.394174 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.402853 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.431344 4900 scope.go:117] "RemoveContainer" containerID="e82da1ed9487f2b43c135a4c9c5aacd7a7e6578d47a8c91ecaac3cada80d60e3" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.461246 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/823a463c-6984-4621-ba0f-b9ad8b7f618c-logs\") pod \"nova-api-0\" (UID: \"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.461303 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/823a463c-6984-4621-ba0f-b9ad8b7f618c-config-data\") pod \"nova-api-0\" (UID: \"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.461361 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-logs\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.461393 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-config-data\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.461417 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.461450 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xpzv\" (UniqueName: \"kubernetes.io/projected/823a463c-6984-4621-ba0f-b9ad8b7f618c-kube-api-access-4xpzv\") pod \"nova-api-0\" (UID: \"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.461614 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/823a463c-6984-4621-ba0f-b9ad8b7f618c-logs\") pod \"nova-api-0\" (UID: \"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.463507 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/823a463c-6984-4621-ba0f-b9ad8b7f618c-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.463567 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8846t\" (UniqueName: \"kubernetes.io/projected/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-kube-api-access-8846t\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.467675 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/823a463c-6984-4621-ba0f-b9ad8b7f618c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.469166 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/823a463c-6984-4621-ba0f-b9ad8b7f618c-config-data\") pod \"nova-api-0\" (UID: \"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.500741 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.505005 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xpzv\" (UniqueName: \"kubernetes.io/projected/823a463c-6984-4621-ba0f-b9ad8b7f618c-kube-api-access-4xpzv\") pod \"nova-api-0\" (UID: \"823a463c-6984-4621-ba0f-b9ad8b7f618c\") " pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.519567 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.533969 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.535340 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.538546 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.548620 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.565136 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8846t\" (UniqueName: \"kubernetes.io/projected/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-kube-api-access-8846t\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.565290 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-logs\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.565324 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-config-data\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.565347 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.566162 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-logs\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.569559 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.570228 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-config-data\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.582188 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8846t\" (UniqueName: \"kubernetes.io/projected/26c207c8-a3ae-4e45-bcdc-b3840b851d6d-kube-api-access-8846t\") pod \"nova-metadata-0\" (UID: \"26c207c8-a3ae-4e45-bcdc-b3840b851d6d\") " pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.667598 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83b06487-b1e6-4657-a114-280a21544af9-config-data\") pod \"nova-scheduler-0\" (UID: 
\"83b06487-b1e6-4657-a114-280a21544af9\") " pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.667757 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxnhs\" (UniqueName: \"kubernetes.io/projected/83b06487-b1e6-4657-a114-280a21544af9-kube-api-access-xxnhs\") pod \"nova-scheduler-0\" (UID: \"83b06487-b1e6-4657-a114-280a21544af9\") " pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.667865 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b06487-b1e6-4657-a114-280a21544af9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83b06487-b1e6-4657-a114-280a21544af9\") " pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.719225 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.744299 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.770050 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b06487-b1e6-4657-a114-280a21544af9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83b06487-b1e6-4657-a114-280a21544af9\") " pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.770175 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83b06487-b1e6-4657-a114-280a21544af9-config-data\") pod \"nova-scheduler-0\" (UID: \"83b06487-b1e6-4657-a114-280a21544af9\") " pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.770298 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxnhs\" (UniqueName: \"kubernetes.io/projected/83b06487-b1e6-4657-a114-280a21544af9-kube-api-access-xxnhs\") pod \"nova-scheduler-0\" (UID: \"83b06487-b1e6-4657-a114-280a21544af9\") " pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.774190 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83b06487-b1e6-4657-a114-280a21544af9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83b06487-b1e6-4657-a114-280a21544af9\") " pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.776771 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83b06487-b1e6-4657-a114-280a21544af9-config-data\") pod \"nova-scheduler-0\" (UID: \"83b06487-b1e6-4657-a114-280a21544af9\") " pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.791276 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxnhs\" (UniqueName: \"kubernetes.io/projected/83b06487-b1e6-4657-a114-280a21544af9-kube-api-access-xxnhs\") pod \"nova-scheduler-0\" (UID: \"83b06487-b1e6-4657-a114-280a21544af9\") " pod="openstack/nova-scheduler-0" Dec 02 16:11:59 crc kubenswrapper[4900]: I1202 16:11:59.952749 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 16:12:00 crc kubenswrapper[4900]: I1202 16:12:00.195343 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 16:12:00 crc kubenswrapper[4900]: W1202 16:12:00.198017 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod823a463c_6984_4621_ba0f_b9ad8b7f618c.slice/crio-be62cc10507fdd7efb6aac6bac6e23fe62bd91505e615630cd6025951ea1d73b WatchSource:0}: Error finding container be62cc10507fdd7efb6aac6bac6e23fe62bd91505e615630cd6025951ea1d73b: Status 404 returned error can't find the container with id be62cc10507fdd7efb6aac6bac6e23fe62bd91505e615630cd6025951ea1d73b Dec 02 16:12:00 crc kubenswrapper[4900]: I1202 16:12:00.238326 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"823a463c-6984-4621-ba0f-b9ad8b7f618c","Type":"ContainerStarted","Data":"be62cc10507fdd7efb6aac6bac6e23fe62bd91505e615630cd6025951ea1d73b"} Dec 02 16:12:00 crc kubenswrapper[4900]: I1202 16:12:00.298008 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 16:12:00 crc kubenswrapper[4900]: I1202 16:12:00.436212 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 16:12:00 crc kubenswrapper[4900]: I1202 16:12:00.923854 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b0f74c1-cc44-44ac-a262-eea482b36ca8" path="/var/lib/kubelet/pods/1b0f74c1-cc44-44ac-a262-eea482b36ca8/volumes" Dec 02 16:12:00 crc kubenswrapper[4900]: I1202 16:12:00.925918 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c11796-f4ef-4637-8541-5b27d488f6ab" path="/var/lib/kubelet/pods/d7c11796-f4ef-4637-8541-5b27d488f6ab/volumes" Dec 02 16:12:00 crc kubenswrapper[4900]: I1202 16:12:00.927239 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff591ec8-ae35-4a05-b7e1-99b63b7125d7" path="/var/lib/kubelet/pods/ff591ec8-ae35-4a05-b7e1-99b63b7125d7/volumes" Dec 02 16:12:01 crc kubenswrapper[4900]: I1202 16:12:01.252516 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"823a463c-6984-4621-ba0f-b9ad8b7f618c","Type":"ContainerStarted","Data":"2140fc079bbf632f73a1a6a350325954887b5c652991626ec72c86c51af3e144"} Dec 02 16:12:01 crc kubenswrapper[4900]: I1202 16:12:01.252567 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"823a463c-6984-4621-ba0f-b9ad8b7f618c","Type":"ContainerStarted","Data":"7135d0828ded60d2a3fa75b4714001dcf090dc28fc2a2afb1f19db34c0ce0105"} Dec 02 16:12:01 crc kubenswrapper[4900]: I1202 16:12:01.266093 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26c207c8-a3ae-4e45-bcdc-b3840b851d6d","Type":"ContainerStarted","Data":"b18438be4b6741cfac096c10a27ce1f857f68621e2111ecf173ffcc2f1e76faa"} Dec 02 16:12:01 crc kubenswrapper[4900]: I1202 16:12:01.266485 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26c207c8-a3ae-4e45-bcdc-b3840b851d6d","Type":"ContainerStarted","Data":"62069adab4b283fb7415c583781f131bd43a85820e6a405ff5b4cf2586572558"} Dec 02 16:12:01 crc kubenswrapper[4900]: I1202 16:12:01.266512 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"26c207c8-a3ae-4e45-bcdc-b3840b851d6d","Type":"ContainerStarted","Data":"f3aa6298516f02d65e231d4afba6a44f11121985b7dcb10e7c7bb92a6a4c111e"} Dec 02 16:12:01 crc kubenswrapper[4900]: I1202 16:12:01.268334 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83b06487-b1e6-4657-a114-280a21544af9","Type":"ContainerStarted","Data":"6b1a72dc6569d8589dd189fe6e90d83688915ac35c521af50c8774134cf5efc0"} Dec 02 16:12:01 crc kubenswrapper[4900]: I1202 16:12:01.268373 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83b06487-b1e6-4657-a114-280a21544af9","Type":"ContainerStarted","Data":"674af0cf732151bcb11674cde8d095211c39f493eadc2eba97fe7f7f80fe9411"} Dec 02 16:12:01 crc kubenswrapper[4900]: I1202 16:12:01.298218 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.298196257 podStartE2EDuration="2.298196257s" podCreationTimestamp="2025-12-02 16:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:12:01.287908115 +0000 UTC m=+8966.703721976" watchObservedRunningTime="2025-12-02 16:12:01.298196257 +0000 UTC m=+8966.714010118" Dec 02 16:12:01 crc kubenswrapper[4900]: I1202 16:12:01.327577 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.327555969 podStartE2EDuration="2.327555969s" podCreationTimestamp="2025-12-02 16:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:12:01.313287874 +0000 UTC m=+8966.729101755" watchObservedRunningTime="2025-12-02 16:12:01.327555969 +0000 UTC m=+8966.743369830" Dec 02 16:12:01 crc kubenswrapper[4900]: I1202 16:12:01.346876 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.346859136 podStartE2EDuration="2.346859136s" podCreationTimestamp="2025-12-02 16:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 16:12:01.336315137 +0000 UTC m=+8966.752128988" watchObservedRunningTime="2025-12-02 16:12:01.346859136 +0000 UTC m=+8966.762672997" Dec 02 16:12:04 crc kubenswrapper[4900]: I1202 16:12:04.745338 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 16:12:04 crc kubenswrapper[4900]: I1202 16:12:04.745899 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 16:12:04 crc kubenswrapper[4900]: I1202 16:12:04.953483 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 16:12:05 crc kubenswrapper[4900]: I1202 16:12:05.815451 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 02 16:12:08 crc kubenswrapper[4900]: I1202 16:12:08.328067 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 02 16:12:09 crc kubenswrapper[4900]: I1202 16:12:09.719875 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 16:12:09 crc kubenswrapper[4900]: I1202 16:12:09.720343 4900 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 16:12:09 crc kubenswrapper[4900]: I1202 16:12:09.745290 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 16:12:09 crc kubenswrapper[4900]: I1202 16:12:09.745370 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 16:12:09 crc kubenswrapper[4900]: I1202 16:12:09.953497 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 16:12:10 crc kubenswrapper[4900]: I1202 16:12:10.389995 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 16:12:10 crc kubenswrapper[4900]: I1202 16:12:10.457958 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 16:12:10 crc kubenswrapper[4900]: I1202 16:12:10.884850 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="823a463c-6984-4621-ba0f-b9ad8b7f618c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 16:12:10 crc kubenswrapper[4900]: I1202 16:12:10.884883 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="26c207c8-a3ae-4e45-bcdc-b3840b851d6d" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.191:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 16:12:10 crc kubenswrapper[4900]: I1202 16:12:10.884991 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="823a463c-6984-4621-ba0f-b9ad8b7f618c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 16:12:10 crc kubenswrapper[4900]: I1202 16:12:10.885029 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="26c207c8-a3ae-4e45-bcdc-b3840b851d6d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.191:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 16:12:15 crc kubenswrapper[4900]: I1202 16:12:15.117160 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:12:15 crc kubenswrapper[4900]: I1202 16:12:15.117796 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:12:19 crc kubenswrapper[4900]: I1202 16:12:19.725178 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 16:12:19 crc kubenswrapper[4900]: I1202 16:12:19.728477 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 16:12:19 crc kubenswrapper[4900]: I1202 16:12:19.728944 4900 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 16:12:19 crc kubenswrapper[4900]: I1202 16:12:19.729015 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 16:12:19 crc kubenswrapper[4900]: I1202 16:12:19.732770 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 16:12:19 crc kubenswrapper[4900]: I1202 16:12:19.739901 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 16:12:19 crc kubenswrapper[4900]: I1202 16:12:19.750125 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 16:12:19 crc kubenswrapper[4900]: I1202 16:12:19.750773 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 16:12:19 crc kubenswrapper[4900]: I1202 16:12:19.754838 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 16:12:19 crc kubenswrapper[4900]: I1202 16:12:19.756770 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 16:12:45 crc kubenswrapper[4900]: I1202 16:12:45.116850 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:12:45 crc kubenswrapper[4900]: I1202 16:12:45.118787 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:12:45 crc kubenswrapper[4900]: I1202 16:12:45.118891 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 16:12:45 crc kubenswrapper[4900]: I1202 16:12:45.119512 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5240b0b5c44f58e14a690f455d362fc37b274a6424c43bb66b5a66a5725e9f7f"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 16:12:45 crc kubenswrapper[4900]: I1202 16:12:45.119636 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://5240b0b5c44f58e14a690f455d362fc37b274a6424c43bb66b5a66a5725e9f7f" gracePeriod=600 Dec 02 16:12:45 crc kubenswrapper[4900]: I1202 16:12:45.866893 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="5240b0b5c44f58e14a690f455d362fc37b274a6424c43bb66b5a66a5725e9f7f" exitCode=0 Dec 02 16:12:45 crc kubenswrapper[4900]: I1202 16:12:45.866961 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" 
event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"5240b0b5c44f58e14a690f455d362fc37b274a6424c43bb66b5a66a5725e9f7f"} Dec 02 16:12:45 crc kubenswrapper[4900]: I1202 16:12:45.867471 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f"} Dec 02 16:12:45 crc kubenswrapper[4900]: I1202 16:12:45.867493 4900 scope.go:117] "RemoveContainer" containerID="a39cad82aeed8c84de232689dd43093f62ed06a6b38cee31917bc75e441065bb" Dec 02 16:14:31 crc kubenswrapper[4900]: I1202 16:14:31.377384 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-64d9497c5c-zxhv8" podUID="7c81f731-76e5-4d22-ba31-c6fcaf3f699c" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 02 16:14:45 crc kubenswrapper[4900]: I1202 16:14:45.117246 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:14:45 crc kubenswrapper[4900]: I1202 16:14:45.118362 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.181966 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m"] Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.184826 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.187325 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.188822 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.194670 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m"] Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.376945 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48jn\" (UniqueName: \"kubernetes.io/projected/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-kube-api-access-f48jn\") pod \"collect-profiles-29411535-wb92m\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.377372 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-secret-volume\") pod \"collect-profiles-29411535-wb92m\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.377519 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-config-volume\") pod \"collect-profiles-29411535-wb92m\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.479591 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-secret-volume\") pod \"collect-profiles-29411535-wb92m\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.479673 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-config-volume\") pod \"collect-profiles-29411535-wb92m\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.479807 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f48jn\" (UniqueName: \"kubernetes.io/projected/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-kube-api-access-f48jn\") pod \"collect-profiles-29411535-wb92m\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.480865 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-config-volume\") pod 
\"collect-profiles-29411535-wb92m\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.491746 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-secret-volume\") pod \"collect-profiles-29411535-wb92m\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.501954 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48jn\" (UniqueName: \"kubernetes.io/projected/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-kube-api-access-f48jn\") pod \"collect-profiles-29411535-wb92m\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:00 crc kubenswrapper[4900]: I1202 16:15:00.516122 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:01 crc kubenswrapper[4900]: I1202 16:15:01.025748 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m"] Dec 02 16:15:01 crc kubenswrapper[4900]: I1202 16:15:01.589477 4900 generic.go:334] "Generic (PLEG): container finished" podID="e0c52cc9-1c14-4f3b-8265-5f0774f19abb" containerID="f22db9386c62aebdc4e4fdb794565469f82a6243494981a375af26f0af6eaad9" exitCode=0 Dec 02 16:15:01 crc kubenswrapper[4900]: I1202 16:15:01.589529 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" event={"ID":"e0c52cc9-1c14-4f3b-8265-5f0774f19abb","Type":"ContainerDied","Data":"f22db9386c62aebdc4e4fdb794565469f82a6243494981a375af26f0af6eaad9"} Dec 02 16:15:01 crc kubenswrapper[4900]: I1202 16:15:01.590719 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" event={"ID":"e0c52cc9-1c14-4f3b-8265-5f0774f19abb","Type":"ContainerStarted","Data":"1b4856f9cedf1f9b5369d056853eef208b2279878499cec6cc2e5efdf750fa65"} Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.022490 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.152814 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-secret-volume\") pod \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.153500 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f48jn\" (UniqueName: \"kubernetes.io/projected/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-kube-api-access-f48jn\") pod \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.153666 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-config-volume\") pod \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\" (UID: \"e0c52cc9-1c14-4f3b-8265-5f0774f19abb\") " Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.155221 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-config-volume" (OuterVolumeSpecName: "config-volume") pod "e0c52cc9-1c14-4f3b-8265-5f0774f19abb" (UID: "e0c52cc9-1c14-4f3b-8265-5f0774f19abb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.160324 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e0c52cc9-1c14-4f3b-8265-5f0774f19abb" (UID: "e0c52cc9-1c14-4f3b-8265-5f0774f19abb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.160373 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-kube-api-access-f48jn" (OuterVolumeSpecName: "kube-api-access-f48jn") pod "e0c52cc9-1c14-4f3b-8265-5f0774f19abb" (UID: "e0c52cc9-1c14-4f3b-8265-5f0774f19abb"). InnerVolumeSpecName "kube-api-access-f48jn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.256528 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f48jn\" (UniqueName: \"kubernetes.io/projected/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-kube-api-access-f48jn\") on node \"crc\" DevicePath \"\"" Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.256581 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.256602 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c52cc9-1c14-4f3b-8265-5f0774f19abb-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.621159 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" event={"ID":"e0c52cc9-1c14-4f3b-8265-5f0774f19abb","Type":"ContainerDied","Data":"1b4856f9cedf1f9b5369d056853eef208b2279878499cec6cc2e5efdf750fa65"} Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.621217 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b4856f9cedf1f9b5369d056853eef208b2279878499cec6cc2e5efdf750fa65" Dec 02 16:15:03 crc kubenswrapper[4900]: I1202 16:15:03.621264 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411535-wb92m" Dec 02 16:15:04 crc kubenswrapper[4900]: I1202 16:15:04.141763 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl"] Dec 02 16:15:04 crc kubenswrapper[4900]: I1202 16:15:04.155611 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411490-fb6zl"] Dec 02 16:15:04 crc kubenswrapper[4900]: I1202 16:15:04.926386 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ab412e-e6b6-4de9-a4c0-e4d23fc46826" path="/var/lib/kubelet/pods/d8ab412e-e6b6-4de9-a4c0-e4d23fc46826/volumes" Dec 02 16:15:14 crc kubenswrapper[4900]: I1202 16:15:14.763361 4900 scope.go:117] "RemoveContainer" containerID="7a1abc76993582bd8edc1075f75236b6cc494e3372fa5f49e60a8aca698d7d38" Dec 02 16:15:15 crc kubenswrapper[4900]: I1202 16:15:15.116589 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:15:15 crc kubenswrapper[4900]: I1202 16:15:15.117213 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:15:17 crc kubenswrapper[4900]: I1202 16:15:17.735876 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cwkrz"] Dec 02 16:15:17 crc kubenswrapper[4900]: E1202 16:15:17.736950 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c52cc9-1c14-4f3b-8265-5f0774f19abb" 
containerName="collect-profiles" Dec 02 16:15:17 crc kubenswrapper[4900]: I1202 16:15:17.736964 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c52cc9-1c14-4f3b-8265-5f0774f19abb" containerName="collect-profiles" Dec 02 16:15:17 crc kubenswrapper[4900]: I1202 16:15:17.737180 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c52cc9-1c14-4f3b-8265-5f0774f19abb" containerName="collect-profiles" Dec 02 16:15:17 crc kubenswrapper[4900]: I1202 16:15:17.739073 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:17 crc kubenswrapper[4900]: I1202 16:15:17.757668 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwkrz"] Dec 02 16:15:17 crc kubenswrapper[4900]: I1202 16:15:17.899945 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdpb\" (UniqueName: \"kubernetes.io/projected/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-kube-api-access-ngdpb\") pod \"redhat-marketplace-cwkrz\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:17 crc kubenswrapper[4900]: I1202 16:15:17.900262 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-utilities\") pod \"redhat-marketplace-cwkrz\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:17 crc kubenswrapper[4900]: I1202 16:15:17.900337 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-catalog-content\") pod \"redhat-marketplace-cwkrz\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:18 crc kubenswrapper[4900]: I1202 16:15:18.002722 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngdpb\" (UniqueName: \"kubernetes.io/projected/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-kube-api-access-ngdpb\") pod \"redhat-marketplace-cwkrz\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:18 crc kubenswrapper[4900]: I1202 16:15:18.003517 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-utilities\") pod \"redhat-marketplace-cwkrz\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:18 crc kubenswrapper[4900]: I1202 16:15:18.003667 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-catalog-content\") pod \"redhat-marketplace-cwkrz\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:18 crc kubenswrapper[4900]: I1202 16:15:18.003868 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-utilities\") pod \"redhat-marketplace-cwkrz\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " 
pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:18 crc kubenswrapper[4900]: I1202 16:15:18.004089 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-catalog-content\") pod \"redhat-marketplace-cwkrz\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:18 crc kubenswrapper[4900]: I1202 16:15:18.029261 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngdpb\" (UniqueName: \"kubernetes.io/projected/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-kube-api-access-ngdpb\") pod \"redhat-marketplace-cwkrz\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:18 crc kubenswrapper[4900]: I1202 16:15:18.071771 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:18 crc kubenswrapper[4900]: I1202 16:15:18.684465 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwkrz"] Dec 02 16:15:18 crc kubenswrapper[4900]: I1202 16:15:18.789307 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwkrz" event={"ID":"ff25d9fa-a427-48ec-83e0-22cff47a5eb0","Type":"ContainerStarted","Data":"9fd6c09a523c5ae5ff4174daf00baf0d16150ff581da66e5fe4d9948509d3d7e"} Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.540197 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g57r8"] Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.545874 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.558203 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g57r8"] Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.744263 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc8qk\" (UniqueName: \"kubernetes.io/projected/24b02e59-d93b-4106-b2b0-8e525935734b-kube-api-access-wc8qk\") pod \"redhat-operators-g57r8\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.744341 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-catalog-content\") pod \"redhat-operators-g57r8\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.744395 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-utilities\") pod \"redhat-operators-g57r8\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.810513 4900 generic.go:334] "Generic (PLEG): container finished" podID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" containerID="d0fbc48ff2caa6371421ad576e97109be251e90e71735aee6eaaea970b793dac" exitCode=0 Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.810578 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwkrz" event={"ID":"ff25d9fa-a427-48ec-83e0-22cff47a5eb0","Type":"ContainerDied","Data":"d0fbc48ff2caa6371421ad576e97109be251e90e71735aee6eaaea970b793dac"} Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.815097 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.846176 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc8qk\" (UniqueName: \"kubernetes.io/projected/24b02e59-d93b-4106-b2b0-8e525935734b-kube-api-access-wc8qk\") pod \"redhat-operators-g57r8\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.846254 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-catalog-content\") pod \"redhat-operators-g57r8\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.846307 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-utilities\") pod \"redhat-operators-g57r8\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.847149 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-utilities\") pod \"redhat-operators-g57r8\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:19 crc kubenswrapper[4900]: I1202 16:15:19.847340 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-catalog-content\") pod \"redhat-operators-g57r8\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:20 crc kubenswrapper[4900]: I1202 16:15:20.455070 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc8qk\" (UniqueName: \"kubernetes.io/projected/24b02e59-d93b-4106-b2b0-8e525935734b-kube-api-access-wc8qk\") pod \"redhat-operators-g57r8\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:20 crc kubenswrapper[4900]: I1202 16:15:20.472954 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:20 crc kubenswrapper[4900]: I1202 16:15:20.837447 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwkrz" event={"ID":"ff25d9fa-a427-48ec-83e0-22cff47a5eb0","Type":"ContainerStarted","Data":"009c9b7e44e2d6d041803c67713f37a4274f87c85ec3daf04e0b77afa6659abe"} Dec 02 16:15:21 crc kubenswrapper[4900]: I1202 16:15:21.007396 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g57r8"] Dec 02 16:15:21 crc kubenswrapper[4900]: I1202 16:15:21.854591 4900 generic.go:334] "Generic (PLEG): container finished" podID="24b02e59-d93b-4106-b2b0-8e525935734b" containerID="1245c06b82e8dad9bd7090bab62620465be4b196b07645deec2cd6aa9f1d506d" exitCode=0 Dec 02 16:15:21 crc kubenswrapper[4900]: I1202 16:15:21.854689 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g57r8" event={"ID":"24b02e59-d93b-4106-b2b0-8e525935734b","Type":"ContainerDied","Data":"1245c06b82e8dad9bd7090bab62620465be4b196b07645deec2cd6aa9f1d506d"} Dec 02 16:15:21 crc kubenswrapper[4900]: I1202 16:15:21.855260 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g57r8" event={"ID":"24b02e59-d93b-4106-b2b0-8e525935734b","Type":"ContainerStarted","Data":"f564a6834071ccb250aff02b20c57ea93a04d4eaa58af353c58f9771d04b771a"} Dec 02 16:15:21 crc kubenswrapper[4900]: I1202 16:15:21.859891 4900 generic.go:334] "Generic (PLEG): container finished" podID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" containerID="009c9b7e44e2d6d041803c67713f37a4274f87c85ec3daf04e0b77afa6659abe" exitCode=0 Dec 02 16:15:21 crc kubenswrapper[4900]: I1202 16:15:21.859935 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwkrz" event={"ID":"ff25d9fa-a427-48ec-83e0-22cff47a5eb0","Type":"ContainerDied","Data":"009c9b7e44e2d6d041803c67713f37a4274f87c85ec3daf04e0b77afa6659abe"} Dec 02 16:15:23 crc kubenswrapper[4900]: I1202 16:15:23.879093 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g57r8" event={"ID":"24b02e59-d93b-4106-b2b0-8e525935734b","Type":"ContainerStarted","Data":"d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78"} Dec 02 16:15:23 crc kubenswrapper[4900]: I1202 16:15:23.885227 4900 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwkrz" event={"ID":"ff25d9fa-a427-48ec-83e0-22cff47a5eb0","Type":"ContainerStarted","Data":"95c8a3e1461bbf5f6888a969336e9cbfd8749116799139ed28262b73a4bf05c3"} Dec 02 16:15:23 crc kubenswrapper[4900]: I1202 16:15:23.928982 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cwkrz" podStartSLOduration=4.400546194 podStartE2EDuration="6.92896033s" podCreationTimestamp="2025-12-02 16:15:17 +0000 UTC" firstStartedPulling="2025-12-02 16:15:19.81481975 +0000 UTC m=+9165.230633611" lastFinishedPulling="2025-12-02 16:15:22.343233896 +0000 UTC m=+9167.759047747" observedRunningTime="2025-12-02 16:15:23.916905439 +0000 UTC m=+9169.332719290" watchObservedRunningTime="2025-12-02 16:15:23.92896033 +0000 UTC m=+9169.344774181" Dec 02 16:15:25 crc kubenswrapper[4900]: I1202 16:15:25.912975 4900 generic.go:334] "Generic (PLEG): container finished" podID="24b02e59-d93b-4106-b2b0-8e525935734b" containerID="d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78" exitCode=0 Dec 02 16:15:25 crc kubenswrapper[4900]: I1202 16:15:25.913057 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g57r8" event={"ID":"24b02e59-d93b-4106-b2b0-8e525935734b","Type":"ContainerDied","Data":"d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78"} Dec 02 16:15:26 crc kubenswrapper[4900]: I1202 16:15:26.929635 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g57r8" event={"ID":"24b02e59-d93b-4106-b2b0-8e525935734b","Type":"ContainerStarted","Data":"9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87"} Dec 02 16:15:26 crc kubenswrapper[4900]: I1202 16:15:26.986728 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g57r8" podStartSLOduration=3.531369367 podStartE2EDuration="7.986709827s" podCreationTimestamp="2025-12-02 16:15:19 +0000 UTC" firstStartedPulling="2025-12-02 16:15:21.859087997 +0000 UTC m=+9167.274901878" lastFinishedPulling="2025-12-02 16:15:26.314428447 +0000 UTC m=+9171.730242338" observedRunningTime="2025-12-02 16:15:26.972583097 +0000 UTC m=+9172.388396958" watchObservedRunningTime="2025-12-02 16:15:26.986709827 +0000 UTC m=+9172.402523688" Dec 02 16:15:28 crc kubenswrapper[4900]: I1202 16:15:28.072078 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:28 crc kubenswrapper[4900]: I1202 16:15:28.072164 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:28 crc kubenswrapper[4900]: I1202 16:15:28.148609 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:29 crc kubenswrapper[4900]: I1202 16:15:29.011809 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:30 crc kubenswrapper[4900]: I1202 16:15:30.324955 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwkrz"] Dec 02 16:15:30 crc kubenswrapper[4900]: I1202 16:15:30.474094 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:30 crc 
kubenswrapper[4900]: I1202 16:15:30.474152 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:31 crc kubenswrapper[4900]: I1202 16:15:31.571607 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g57r8" podUID="24b02e59-d93b-4106-b2b0-8e525935734b" containerName="registry-server" probeResult="failure" output=< Dec 02 16:15:31 crc kubenswrapper[4900]: timeout: failed to connect service ":50051" within 1s Dec 02 16:15:31 crc kubenswrapper[4900]: > Dec 02 16:15:32 crc kubenswrapper[4900]: I1202 16:15:32.022296 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cwkrz" podUID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" containerName="registry-server" containerID="cri-o://95c8a3e1461bbf5f6888a969336e9cbfd8749116799139ed28262b73a4bf05c3" gracePeriod=2 Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.039111 4900 generic.go:334] "Generic (PLEG): container finished" podID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" containerID="95c8a3e1461bbf5f6888a969336e9cbfd8749116799139ed28262b73a4bf05c3" exitCode=0 Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.039239 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwkrz" event={"ID":"ff25d9fa-a427-48ec-83e0-22cff47a5eb0","Type":"ContainerDied","Data":"95c8a3e1461bbf5f6888a969336e9cbfd8749116799139ed28262b73a4bf05c3"} Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.172150 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.276788 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-utilities\") pod \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.277793 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-utilities" (OuterVolumeSpecName: "utilities") pod "ff25d9fa-a427-48ec-83e0-22cff47a5eb0" (UID: "ff25d9fa-a427-48ec-83e0-22cff47a5eb0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.277914 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngdpb\" (UniqueName: \"kubernetes.io/projected/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-kube-api-access-ngdpb\") pod \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.278257 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-catalog-content\") pod \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\" (UID: \"ff25d9fa-a427-48ec-83e0-22cff47a5eb0\") " Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.279085 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.294358 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff25d9fa-a427-48ec-83e0-22cff47a5eb0" (UID: "ff25d9fa-a427-48ec-83e0-22cff47a5eb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.299740 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-kube-api-access-ngdpb" (OuterVolumeSpecName: "kube-api-access-ngdpb") pod "ff25d9fa-a427-48ec-83e0-22cff47a5eb0" (UID: "ff25d9fa-a427-48ec-83e0-22cff47a5eb0"). InnerVolumeSpecName "kube-api-access-ngdpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.381595 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 16:15:33 crc kubenswrapper[4900]: I1202 16:15:33.381667 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngdpb\" (UniqueName: \"kubernetes.io/projected/ff25d9fa-a427-48ec-83e0-22cff47a5eb0-kube-api-access-ngdpb\") on node \"crc\" DevicePath \"\"" Dec 02 16:15:34 crc kubenswrapper[4900]: I1202 16:15:34.055122 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwkrz" event={"ID":"ff25d9fa-a427-48ec-83e0-22cff47a5eb0","Type":"ContainerDied","Data":"9fd6c09a523c5ae5ff4174daf00baf0d16150ff581da66e5fe4d9948509d3d7e"} Dec 02 16:15:34 crc kubenswrapper[4900]: I1202 16:15:34.055718 4900 scope.go:117] "RemoveContainer" containerID="95c8a3e1461bbf5f6888a969336e9cbfd8749116799139ed28262b73a4bf05c3" Dec 02 16:15:34 crc kubenswrapper[4900]: I1202 16:15:34.055189 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwkrz" Dec 02 16:15:34 crc kubenswrapper[4900]: I1202 16:15:34.114442 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwkrz"] Dec 02 16:15:34 crc kubenswrapper[4900]: I1202 16:15:34.117581 4900 scope.go:117] "RemoveContainer" containerID="009c9b7e44e2d6d041803c67713f37a4274f87c85ec3daf04e0b77afa6659abe" Dec 02 16:15:34 crc kubenswrapper[4900]: I1202 16:15:34.127981 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwkrz"] Dec 02 16:15:34 crc kubenswrapper[4900]: I1202 16:15:34.489795 4900 scope.go:117] "RemoveContainer" containerID="d0fbc48ff2caa6371421ad576e97109be251e90e71735aee6eaaea970b793dac" Dec 02 16:15:34 crc kubenswrapper[4900]: I1202 16:15:34.932159 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" path="/var/lib/kubelet/pods/ff25d9fa-a427-48ec-83e0-22cff47a5eb0/volumes" Dec 02 16:15:40 crc kubenswrapper[4900]: I1202 16:15:40.582318 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:40 crc kubenswrapper[4900]: I1202 16:15:40.637151 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:40 crc kubenswrapper[4900]: I1202 16:15:40.824406 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g57r8"] Dec 02 16:15:42 crc kubenswrapper[4900]: I1202 16:15:42.171318 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g57r8" podUID="24b02e59-d93b-4106-b2b0-8e525935734b" containerName="registry-server" containerID="cri-o://9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87" gracePeriod=2 Dec 02 16:15:42 crc kubenswrapper[4900]: I1202 16:15:42.714675 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:42 crc kubenswrapper[4900]: I1202 16:15:42.736600 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-catalog-content\") pod \"24b02e59-d93b-4106-b2b0-8e525935734b\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " Dec 02 16:15:42 crc kubenswrapper[4900]: I1202 16:15:42.736795 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-utilities\") pod \"24b02e59-d93b-4106-b2b0-8e525935734b\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " Dec 02 16:15:42 crc kubenswrapper[4900]: I1202 16:15:42.736913 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc8qk\" (UniqueName: \"kubernetes.io/projected/24b02e59-d93b-4106-b2b0-8e525935734b-kube-api-access-wc8qk\") pod \"24b02e59-d93b-4106-b2b0-8e525935734b\" (UID: \"24b02e59-d93b-4106-b2b0-8e525935734b\") " Dec 02 16:15:42 crc kubenswrapper[4900]: I1202 16:15:42.737755 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-utilities" (OuterVolumeSpecName: "utilities") pod "24b02e59-d93b-4106-b2b0-8e525935734b" (UID: "24b02e59-d93b-4106-b2b0-8e525935734b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:15:42 crc kubenswrapper[4900]: I1202 16:15:42.744790 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b02e59-d93b-4106-b2b0-8e525935734b-kube-api-access-wc8qk" (OuterVolumeSpecName: "kube-api-access-wc8qk") pod "24b02e59-d93b-4106-b2b0-8e525935734b" (UID: "24b02e59-d93b-4106-b2b0-8e525935734b"). InnerVolumeSpecName "kube-api-access-wc8qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:15:42 crc kubenswrapper[4900]: I1202 16:15:42.840240 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 16:15:42 crc kubenswrapper[4900]: I1202 16:15:42.840298 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc8qk\" (UniqueName: \"kubernetes.io/projected/24b02e59-d93b-4106-b2b0-8e525935734b-kube-api-access-wc8qk\") on node \"crc\" DevicePath \"\"" Dec 02 16:15:42 crc kubenswrapper[4900]: I1202 16:15:42.853594 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24b02e59-d93b-4106-b2b0-8e525935734b" (UID: "24b02e59-d93b-4106-b2b0-8e525935734b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:15:42 crc kubenswrapper[4900]: I1202 16:15:42.943708 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b02e59-d93b-4106-b2b0-8e525935734b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.185563 4900 generic.go:334] "Generic (PLEG): container finished" podID="24b02e59-d93b-4106-b2b0-8e525935734b" containerID="9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87" exitCode=0 Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.185616 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g57r8" event={"ID":"24b02e59-d93b-4106-b2b0-8e525935734b","Type":"ContainerDied","Data":"9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87"} Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.185664 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g57r8" event={"ID":"24b02e59-d93b-4106-b2b0-8e525935734b","Type":"ContainerDied","Data":"f564a6834071ccb250aff02b20c57ea93a04d4eaa58af353c58f9771d04b771a"} Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.185687 4900 scope.go:117] "RemoveContainer" containerID="9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87" Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.185843 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g57r8" Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.224793 4900 scope.go:117] "RemoveContainer" containerID="d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78" Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.226957 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g57r8"] Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.242824 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g57r8"] Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.266072 4900 scope.go:117] "RemoveContainer" containerID="1245c06b82e8dad9bd7090bab62620465be4b196b07645deec2cd6aa9f1d506d" Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.334686 4900 scope.go:117] "RemoveContainer" containerID="9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87" Dec 02 16:15:43 crc kubenswrapper[4900]: E1202 16:15:43.335571 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87\": container with ID starting with 9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87 not found: ID does not exist" containerID="9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87" Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.335621 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87"} err="failed to get container status \"9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87\": rpc error: code = NotFound desc = could not find container \"9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87\": container with ID starting with 9a43d833d2d4d2fea8c7ca07626cac0f85aef655e44bfbdbd7509f2a04a37d87 not found: ID does not exist" Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.335668 4900 scope.go:117] "RemoveContainer" containerID="d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78" Dec 02 16:15:43 crc kubenswrapper[4900]: E1202 16:15:43.336293 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78\": container with ID starting with d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78 not found: ID does not exist" containerID="d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78" Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.336331 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78"} err="failed to get container status \"d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78\": rpc error: code = NotFound desc = could not find container \"d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78\": container with ID starting with d90fe919419bdcc323b5323bedd0960d18ee684d68ac55436cad16440799be78 not found: ID does not exist" Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.336359 4900 scope.go:117] "RemoveContainer" containerID="1245c06b82e8dad9bd7090bab62620465be4b196b07645deec2cd6aa9f1d506d" Dec 02 16:15:43 crc kubenswrapper[4900]: E1202 16:15:43.336720 4900 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"1245c06b82e8dad9bd7090bab62620465be4b196b07645deec2cd6aa9f1d506d\": container with ID starting with 1245c06b82e8dad9bd7090bab62620465be4b196b07645deec2cd6aa9f1d506d not found: ID does not exist" containerID="1245c06b82e8dad9bd7090bab62620465be4b196b07645deec2cd6aa9f1d506d" Dec 02 16:15:43 crc kubenswrapper[4900]: I1202 16:15:43.336754 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1245c06b82e8dad9bd7090bab62620465be4b196b07645deec2cd6aa9f1d506d"} err="failed to get container status \"1245c06b82e8dad9bd7090bab62620465be4b196b07645deec2cd6aa9f1d506d\": rpc error: code = NotFound desc = could not find container \"1245c06b82e8dad9bd7090bab62620465be4b196b07645deec2cd6aa9f1d506d\": container with ID starting with 1245c06b82e8dad9bd7090bab62620465be4b196b07645deec2cd6aa9f1d506d not found: ID does not exist" Dec 02 16:15:44 crc kubenswrapper[4900]: I1202 16:15:44.924179 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b02e59-d93b-4106-b2b0-8e525935734b" path="/var/lib/kubelet/pods/24b02e59-d93b-4106-b2b0-8e525935734b/volumes" Dec 02 16:15:45 crc kubenswrapper[4900]: I1202 16:15:45.116492 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:15:45 crc kubenswrapper[4900]: I1202 16:15:45.116926 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:15:45 crc kubenswrapper[4900]: I1202 16:15:45.116978 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 16:15:45 crc kubenswrapper[4900]: I1202 16:15:45.119550 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 16:15:45 crc kubenswrapper[4900]: I1202 16:15:45.119627 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" gracePeriod=600 Dec 02 16:15:45 crc kubenswrapper[4900]: E1202 16:15:45.245317 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:15:46 crc kubenswrapper[4900]: I1202 16:15:46.218361 4900 generic.go:334] "Generic 
(PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" exitCode=0 Dec 02 16:15:46 crc kubenswrapper[4900]: I1202 16:15:46.218430 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f"} Dec 02 16:15:46 crc kubenswrapper[4900]: I1202 16:15:46.218498 4900 scope.go:117] "RemoveContainer" containerID="5240b0b5c44f58e14a690f455d362fc37b274a6424c43bb66b5a66a5725e9f7f" Dec 02 16:15:46 crc kubenswrapper[4900]: I1202 16:15:46.219015 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:15:46 crc kubenswrapper[4900]: E1202 16:15:46.219302 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:15:58 crc kubenswrapper[4900]: I1202 16:15:58.910131 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:15:58 crc kubenswrapper[4900]: E1202 16:15:58.911382 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:16:13 crc kubenswrapper[4900]: I1202 16:16:13.910638 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:16:13 crc kubenswrapper[4900]: E1202 16:16:13.911740 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:16:27 crc kubenswrapper[4900]: I1202 16:16:27.911550 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:16:27 crc kubenswrapper[4900]: E1202 16:16:27.912860 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:16:40 crc kubenswrapper[4900]: I1202 16:16:40.911356 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 
16:16:40 crc kubenswrapper[4900]: E1202 16:16:40.912292 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:16:53 crc kubenswrapper[4900]: I1202 16:16:53.911299 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:16:53 crc kubenswrapper[4900]: E1202 16:16:53.912189 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:17:06 crc kubenswrapper[4900]: I1202 16:17:06.910087 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:17:06 crc kubenswrapper[4900]: E1202 16:17:06.910951 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:17:20 crc kubenswrapper[4900]: I1202 16:17:20.911223 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:17:20 crc kubenswrapper[4900]: E1202 16:17:20.912730 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:17:32 crc kubenswrapper[4900]: I1202 16:17:32.910959 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:17:32 crc kubenswrapper[4900]: E1202 16:17:32.913829 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:17:45 crc kubenswrapper[4900]: I1202 16:17:45.911172 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:17:45 crc kubenswrapper[4900]: E1202 16:17:45.912029 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:17:58 crc kubenswrapper[4900]: I1202 16:17:58.910941 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:17:58 crc kubenswrapper[4900]: E1202 16:17:58.912213 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:18:06 crc kubenswrapper[4900]: I1202 16:18:06.947514 4900 generic.go:334] "Generic (PLEG): container finished" podID="a40ced8a-8021-4c6e-8381-4e587bdb7f04" containerID="b22d5e9e681ac0a24a20f63825a6db4da9e3a414c2cf5fab0c2c1b96b2d3b0d3" exitCode=0 Dec 02 16:18:06 crc kubenswrapper[4900]: I1202 16:18:06.948309 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" event={"ID":"a40ced8a-8021-4c6e-8381-4e587bdb7f04","Type":"ContainerDied","Data":"b22d5e9e681ac0a24a20f63825a6db4da9e3a414c2cf5fab0c2c1b96b2d3b0d3"} Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.623325 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.704257 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbn6n\" (UniqueName: \"kubernetes.io/projected/a40ced8a-8021-4c6e-8381-4e587bdb7f04-kube-api-access-sbn6n\") pod \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.704352 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-0\") pod \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.704408 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-1\") pod \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.704445 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-1\") pod \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.704545 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-0\") pod \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.704614 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-combined-ca-bundle\") pod \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.704699 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-1\") pod \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.704727 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-inventory\") pod \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.704750 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ssh-key\") pod \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.704812 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ceph\") pod \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.704889 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-0\") pod \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\" (UID: \"a40ced8a-8021-4c6e-8381-4e587bdb7f04\") " Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.725933 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ceph" (OuterVolumeSpecName: "ceph") pod "a40ced8a-8021-4c6e-8381-4e587bdb7f04" (UID: "a40ced8a-8021-4c6e-8381-4e587bdb7f04"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.736014 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "a40ced8a-8021-4c6e-8381-4e587bdb7f04" (UID: "a40ced8a-8021-4c6e-8381-4e587bdb7f04"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.736051 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40ced8a-8021-4c6e-8381-4e587bdb7f04-kube-api-access-sbn6n" (OuterVolumeSpecName: "kube-api-access-sbn6n") pod "a40ced8a-8021-4c6e-8381-4e587bdb7f04" (UID: "a40ced8a-8021-4c6e-8381-4e587bdb7f04"). InnerVolumeSpecName "kube-api-access-sbn6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.764871 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "a40ced8a-8021-4c6e-8381-4e587bdb7f04" (UID: "a40ced8a-8021-4c6e-8381-4e587bdb7f04"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.765999 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a40ced8a-8021-4c6e-8381-4e587bdb7f04" (UID: "a40ced8a-8021-4c6e-8381-4e587bdb7f04"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.766131 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-inventory" (OuterVolumeSpecName: "inventory") pod "a40ced8a-8021-4c6e-8381-4e587bdb7f04" (UID: "a40ced8a-8021-4c6e-8381-4e587bdb7f04"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.772200 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "a40ced8a-8021-4c6e-8381-4e587bdb7f04" (UID: "a40ced8a-8021-4c6e-8381-4e587bdb7f04"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.775299 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "a40ced8a-8021-4c6e-8381-4e587bdb7f04" (UID: "a40ced8a-8021-4c6e-8381-4e587bdb7f04"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.775624 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "a40ced8a-8021-4c6e-8381-4e587bdb7f04" (UID: "a40ced8a-8021-4c6e-8381-4e587bdb7f04"). InnerVolumeSpecName "nova-cells-global-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.780721 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "a40ced8a-8021-4c6e-8381-4e587bdb7f04" (UID: "a40ced8a-8021-4c6e-8381-4e587bdb7f04"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.784306 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "a40ced8a-8021-4c6e-8381-4e587bdb7f04" (UID: "a40ced8a-8021-4c6e-8381-4e587bdb7f04"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.808096 4900 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.808144 4900 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.808157 4900 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.808171 4900 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.808183 4900 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.808199 4900 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.808213 4900 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.808225 4900 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.808236 4900 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 
16:18:08.808247 4900 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a40ced8a-8021-4c6e-8381-4e587bdb7f04-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.808258 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbn6n\" (UniqueName: \"kubernetes.io/projected/a40ced8a-8021-4c6e-8381-4e587bdb7f04-kube-api-access-sbn6n\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.978320 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" event={"ID":"a40ced8a-8021-4c6e-8381-4e587bdb7f04","Type":"ContainerDied","Data":"1e9853f19dc919be2bd278859bf4c03d621b42a1216f51663ef0fd8f10717303"} Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.978369 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns" Dec 02 16:18:08 crc kubenswrapper[4900]: I1202 16:18:08.978378 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e9853f19dc919be2bd278859bf4c03d621b42a1216f51663ef0fd8f10717303" Dec 02 16:18:10 crc kubenswrapper[4900]: I1202 16:18:10.910606 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:18:10 crc kubenswrapper[4900]: E1202 16:18:10.911267 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:18:24 crc kubenswrapper[4900]: I1202 16:18:24.916102 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:18:24 crc kubenswrapper[4900]: E1202 16:18:24.916935 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.601320 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5dmz9"] Dec 02 16:18:29 crc kubenswrapper[4900]: E1202 16:18:29.602720 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" containerName="extract-content" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.602737 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" containerName="extract-content" Dec 02 16:18:29 crc kubenswrapper[4900]: E1202 16:18:29.602752 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" containerName="registry-server" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.602760 4900 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" containerName="registry-server" Dec 02 16:18:29 crc kubenswrapper[4900]: E1202 16:18:29.602774 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" containerName="extract-utilities" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.602783 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" containerName="extract-utilities" Dec 02 16:18:29 crc kubenswrapper[4900]: E1202 16:18:29.602824 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b02e59-d93b-4106-b2b0-8e525935734b" containerName="registry-server" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.602832 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b02e59-d93b-4106-b2b0-8e525935734b" containerName="registry-server" Dec 02 16:18:29 crc kubenswrapper[4900]: E1202 16:18:29.602843 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b02e59-d93b-4106-b2b0-8e525935734b" containerName="extract-content" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.602851 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b02e59-d93b-4106-b2b0-8e525935734b" containerName="extract-content" Dec 02 16:18:29 crc kubenswrapper[4900]: E1202 16:18:29.602874 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b02e59-d93b-4106-b2b0-8e525935734b" containerName="extract-utilities" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.602883 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b02e59-d93b-4106-b2b0-8e525935734b" containerName="extract-utilities" Dec 02 16:18:29 crc kubenswrapper[4900]: E1202 16:18:29.602905 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40ced8a-8021-4c6e-8381-4e587bdb7f04" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.602917 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40ced8a-8021-4c6e-8381-4e587bdb7f04" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.603201 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b02e59-d93b-4106-b2b0-8e525935734b" containerName="registry-server" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.603224 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff25d9fa-a427-48ec-83e0-22cff47a5eb0" containerName="registry-server" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.603247 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40ced8a-8021-4c6e-8381-4e587bdb7f04" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.605439 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.626181 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5dmz9"] Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.750778 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/788f8fe8-a1c6-4fb2-a117-88ffe447aec2-utilities\") pod \"community-operators-5dmz9\" (UID: \"788f8fe8-a1c6-4fb2-a117-88ffe447aec2\") " pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.750859 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/788f8fe8-a1c6-4fb2-a117-88ffe447aec2-catalog-content\") pod \"community-operators-5dmz9\" (UID: \"788f8fe8-a1c6-4fb2-a117-88ffe447aec2\") " pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.750891 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqq6r\" (UniqueName: \"kubernetes.io/projected/788f8fe8-a1c6-4fb2-a117-88ffe447aec2-kube-api-access-zqq6r\") pod \"community-operators-5dmz9\" (UID: \"788f8fe8-a1c6-4fb2-a117-88ffe447aec2\") " pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.852850 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/788f8fe8-a1c6-4fb2-a117-88ffe447aec2-utilities\") pod \"community-operators-5dmz9\" (UID: \"788f8fe8-a1c6-4fb2-a117-88ffe447aec2\") " pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.852955 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/788f8fe8-a1c6-4fb2-a117-88ffe447aec2-catalog-content\") pod \"community-operators-5dmz9\" (UID: \"788f8fe8-a1c6-4fb2-a117-88ffe447aec2\") " pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.852993 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqq6r\" (UniqueName: \"kubernetes.io/projected/788f8fe8-a1c6-4fb2-a117-88ffe447aec2-kube-api-access-zqq6r\") pod \"community-operators-5dmz9\" (UID: \"788f8fe8-a1c6-4fb2-a117-88ffe447aec2\") " pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.853657 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/788f8fe8-a1c6-4fb2-a117-88ffe447aec2-catalog-content\") pod \"community-operators-5dmz9\" (UID: \"788f8fe8-a1c6-4fb2-a117-88ffe447aec2\") " pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.853698 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/788f8fe8-a1c6-4fb2-a117-88ffe447aec2-utilities\") pod \"community-operators-5dmz9\" (UID: \"788f8fe8-a1c6-4fb2-a117-88ffe447aec2\") " pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.876985 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zqq6r\" (UniqueName: \"kubernetes.io/projected/788f8fe8-a1c6-4fb2-a117-88ffe447aec2-kube-api-access-zqq6r\") pod \"community-operators-5dmz9\" (UID: \"788f8fe8-a1c6-4fb2-a117-88ffe447aec2\") " pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:29 crc kubenswrapper[4900]: I1202 16:18:29.941676 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:30 crc kubenswrapper[4900]: I1202 16:18:30.485580 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5dmz9"] Dec 02 16:18:31 crc kubenswrapper[4900]: I1202 16:18:31.277439 4900 generic.go:334] "Generic (PLEG): container finished" podID="788f8fe8-a1c6-4fb2-a117-88ffe447aec2" containerID="5862cf9224704291e5792f4dc586994256fafbe1e71a2f964e723340f37bf481" exitCode=0 Dec 02 16:18:31 crc kubenswrapper[4900]: I1202 16:18:31.277562 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dmz9" event={"ID":"788f8fe8-a1c6-4fb2-a117-88ffe447aec2","Type":"ContainerDied","Data":"5862cf9224704291e5792f4dc586994256fafbe1e71a2f964e723340f37bf481"} Dec 02 16:18:31 crc kubenswrapper[4900]: I1202 16:18:31.277922 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dmz9" event={"ID":"788f8fe8-a1c6-4fb2-a117-88ffe447aec2","Type":"ContainerStarted","Data":"55808a19249ff11620dacebaf0d6454f6f32fe1cd691fac73c5f7df66565f17e"} Dec 02 16:18:37 crc kubenswrapper[4900]: I1202 16:18:37.910322 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:18:37 crc kubenswrapper[4900]: E1202 16:18:37.911071 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:18:38 crc kubenswrapper[4900]: I1202 16:18:38.400713 4900 generic.go:334] "Generic (PLEG): container finished" podID="788f8fe8-a1c6-4fb2-a117-88ffe447aec2" containerID="a9ebc8a24cd5cdfb82cf797aabcdbbd727d58737689277aaa0bca1a3d7ec1ffa" exitCode=0 Dec 02 16:18:38 crc kubenswrapper[4900]: I1202 16:18:38.400764 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dmz9" event={"ID":"788f8fe8-a1c6-4fb2-a117-88ffe447aec2","Type":"ContainerDied","Data":"a9ebc8a24cd5cdfb82cf797aabcdbbd727d58737689277aaa0bca1a3d7ec1ffa"} Dec 02 16:18:39 crc kubenswrapper[4900]: I1202 16:18:39.413820 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5dmz9" event={"ID":"788f8fe8-a1c6-4fb2-a117-88ffe447aec2","Type":"ContainerStarted","Data":"96e4309c096bdab68e6b6dbebef4c1d21206a02e0d72274e6a58ecee23793799"} Dec 02 16:18:39 crc kubenswrapper[4900]: I1202 16:18:39.441563 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5dmz9" podStartSLOduration=2.849767623 podStartE2EDuration="10.441544021s" podCreationTimestamp="2025-12-02 16:18:29 +0000 UTC" firstStartedPulling="2025-12-02 16:18:31.281953706 +0000 UTC m=+9356.697767587" 
lastFinishedPulling="2025-12-02 16:18:38.873730084 +0000 UTC m=+9364.289543985" observedRunningTime="2025-12-02 16:18:39.431904436 +0000 UTC m=+9364.847718287" watchObservedRunningTime="2025-12-02 16:18:39.441544021 +0000 UTC m=+9364.857357872" Dec 02 16:18:39 crc kubenswrapper[4900]: I1202 16:18:39.942499 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:39 crc kubenswrapper[4900]: I1202 16:18:39.942581 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:41 crc kubenswrapper[4900]: I1202 16:18:41.010871 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5dmz9" podUID="788f8fe8-a1c6-4fb2-a117-88ffe447aec2" containerName="registry-server" probeResult="failure" output=< Dec 02 16:18:41 crc kubenswrapper[4900]: timeout: failed to connect service ":50051" within 1s Dec 02 16:18:41 crc kubenswrapper[4900]: > Dec 02 16:18:50 crc kubenswrapper[4900]: I1202 16:18:50.016259 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:50 crc kubenswrapper[4900]: I1202 16:18:50.072295 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5dmz9" Dec 02 16:18:50 crc kubenswrapper[4900]: I1202 16:18:50.136741 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5dmz9"] Dec 02 16:18:50 crc kubenswrapper[4900]: I1202 16:18:50.293795 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7fm2c"] Dec 02 16:18:50 crc kubenswrapper[4900]: I1202 16:18:50.294146 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7fm2c" podUID="8d5c6da4-dfc6-43be-9124-6107cc309880" containerName="registry-server" containerID="cri-o://dd0902bad7b8d8d520b08c2f86be4a5d65742a94a1bf764f18f92d46164be823" gracePeriod=2 Dec 02 16:18:50 crc kubenswrapper[4900]: I1202 16:18:50.611368 4900 generic.go:334] "Generic (PLEG): container finished" podID="8d5c6da4-dfc6-43be-9124-6107cc309880" containerID="dd0902bad7b8d8d520b08c2f86be4a5d65742a94a1bf764f18f92d46164be823" exitCode=0 Dec 02 16:18:50 crc kubenswrapper[4900]: I1202 16:18:50.611582 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fm2c" event={"ID":"8d5c6da4-dfc6-43be-9124-6107cc309880","Type":"ContainerDied","Data":"dd0902bad7b8d8d520b08c2f86be4a5d65742a94a1bf764f18f92d46164be823"} Dec 02 16:18:50 crc kubenswrapper[4900]: I1202 16:18:50.952242 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7fm2c" Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.098609 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-catalog-content\") pod \"8d5c6da4-dfc6-43be-9124-6107cc309880\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.098786 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-utilities\") pod \"8d5c6da4-dfc6-43be-9124-6107cc309880\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.098837 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6794\" (UniqueName: \"kubernetes.io/projected/8d5c6da4-dfc6-43be-9124-6107cc309880-kube-api-access-x6794\") pod \"8d5c6da4-dfc6-43be-9124-6107cc309880\" (UID: \"8d5c6da4-dfc6-43be-9124-6107cc309880\") " Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.100219 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-utilities" (OuterVolumeSpecName: "utilities") pod "8d5c6da4-dfc6-43be-9124-6107cc309880" (UID: "8d5c6da4-dfc6-43be-9124-6107cc309880"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.121346 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5c6da4-dfc6-43be-9124-6107cc309880-kube-api-access-x6794" (OuterVolumeSpecName: "kube-api-access-x6794") pod "8d5c6da4-dfc6-43be-9124-6107cc309880" (UID: "8d5c6da4-dfc6-43be-9124-6107cc309880"). InnerVolumeSpecName "kube-api-access-x6794". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.163401 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d5c6da4-dfc6-43be-9124-6107cc309880" (UID: "8d5c6da4-dfc6-43be-9124-6107cc309880"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.201906 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.201939 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6794\" (UniqueName: \"kubernetes.io/projected/8d5c6da4-dfc6-43be-9124-6107cc309880-kube-api-access-x6794\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.201948 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5c6da4-dfc6-43be-9124-6107cc309880-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.623736 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fm2c" event={"ID":"8d5c6da4-dfc6-43be-9124-6107cc309880","Type":"ContainerDied","Data":"c92ffa1b39ce97beb2f66ccae5e5e29f5bb27fd2a90ad0820aedf3e8921d6555"} Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.623768 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fm2c" Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.623816 4900 scope.go:117] "RemoveContainer" containerID="dd0902bad7b8d8d520b08c2f86be4a5d65742a94a1bf764f18f92d46164be823" Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.669002 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7fm2c"] Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.690401 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7fm2c"] Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.691923 4900 scope.go:117] "RemoveContainer" containerID="ed0c22841b6189789c41debe553dc5f8281fde4e855b77fe65418644a6d98510" Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.747604 4900 scope.go:117] "RemoveContainer" containerID="59efb499a9c16988eff6dba9a5dba78b66eb7c4c3d6ebcbe8a168195aed97ff0" Dec 02 16:18:51 crc kubenswrapper[4900]: I1202 16:18:51.911065 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:18:51 crc kubenswrapper[4900]: E1202 16:18:51.911328 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:18:52 crc kubenswrapper[4900]: I1202 16:18:52.923686 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d5c6da4-dfc6-43be-9124-6107cc309880" path="/var/lib/kubelet/pods/8d5c6da4-dfc6-43be-9124-6107cc309880/volumes" Dec 02 16:19:06 crc kubenswrapper[4900]: I1202 16:19:06.911210 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:19:06 crc kubenswrapper[4900]: E1202 16:19:06.912021 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:19:17 crc kubenswrapper[4900]: I1202 16:19:17.910841 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:19:17 crc kubenswrapper[4900]: E1202 16:19:17.912033 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:19:30 crc kubenswrapper[4900]: I1202 16:19:30.910119 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:19:30 crc kubenswrapper[4900]: E1202 16:19:30.910845 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:19:43 crc kubenswrapper[4900]: I1202 16:19:43.911172 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:19:43 crc kubenswrapper[4900]: E1202 16:19:43.912460 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:19:44 crc kubenswrapper[4900]: E1202 16:19:44.260221 4900 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.130:38054->38.102.83.130:46203: write tcp 38.102.83.130:38054->38.102.83.130:46203: write: broken pipe Dec 02 16:19:58 crc kubenswrapper[4900]: I1202 16:19:58.911523 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:19:58 crc kubenswrapper[4900]: E1202 16:19:58.912460 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:20:11 crc kubenswrapper[4900]: I1202 16:20:11.910940 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:20:11 crc kubenswrapper[4900]: E1202 16:20:11.912182 4900 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.596244 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vh769"] Dec 02 16:20:19 crc kubenswrapper[4900]: E1202 16:20:19.597860 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5c6da4-dfc6-43be-9124-6107cc309880" containerName="extract-content" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.597912 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5c6da4-dfc6-43be-9124-6107cc309880" containerName="extract-content" Dec 02 16:20:19 crc kubenswrapper[4900]: E1202 16:20:19.598008 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5c6da4-dfc6-43be-9124-6107cc309880" containerName="registry-server" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.598023 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5c6da4-dfc6-43be-9124-6107cc309880" containerName="registry-server" Dec 02 16:20:19 crc kubenswrapper[4900]: E1202 16:20:19.598108 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5c6da4-dfc6-43be-9124-6107cc309880" containerName="extract-utilities" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.598125 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5c6da4-dfc6-43be-9124-6107cc309880" containerName="extract-utilities" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.600801 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d5c6da4-dfc6-43be-9124-6107cc309880" containerName="registry-server" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.643596 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.661749 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vh769"] Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.826207 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-catalog-content\") pod \"certified-operators-vh769\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.826783 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-utilities\") pod \"certified-operators-vh769\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.826838 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6gdl\" (UniqueName: \"kubernetes.io/projected/9da755df-d9b7-408b-bb58-c88cd747e466-kube-api-access-k6gdl\") pod \"certified-operators-vh769\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.929148 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-utilities\") pod \"certified-operators-vh769\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.929234 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6gdl\" (UniqueName: \"kubernetes.io/projected/9da755df-d9b7-408b-bb58-c88cd747e466-kube-api-access-k6gdl\") pod \"certified-operators-vh769\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.929499 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-catalog-content\") pod \"certified-operators-vh769\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.929608 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-utilities\") pod \"certified-operators-vh769\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:19 crc kubenswrapper[4900]: I1202 16:20:19.929924 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-catalog-content\") pod \"certified-operators-vh769\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:20 crc kubenswrapper[4900]: I1202 16:20:20.559104 4900 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k6gdl\" (UniqueName: \"kubernetes.io/projected/9da755df-d9b7-408b-bb58-c88cd747e466-kube-api-access-k6gdl\") pod \"certified-operators-vh769\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:20 crc kubenswrapper[4900]: I1202 16:20:20.581458 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:21 crc kubenswrapper[4900]: I1202 16:20:21.087410 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vh769"] Dec 02 16:20:21 crc kubenswrapper[4900]: I1202 16:20:21.876677 4900 generic.go:334] "Generic (PLEG): container finished" podID="9da755df-d9b7-408b-bb58-c88cd747e466" containerID="3dc300e7f1d2c88c47e540f724030e18ff82eded6df79c893d00910bf677fd98" exitCode=0 Dec 02 16:20:21 crc kubenswrapper[4900]: I1202 16:20:21.877036 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh769" event={"ID":"9da755df-d9b7-408b-bb58-c88cd747e466","Type":"ContainerDied","Data":"3dc300e7f1d2c88c47e540f724030e18ff82eded6df79c893d00910bf677fd98"} Dec 02 16:20:21 crc kubenswrapper[4900]: I1202 16:20:21.877066 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh769" event={"ID":"9da755df-d9b7-408b-bb58-c88cd747e466","Type":"ContainerStarted","Data":"5a49c44dc61546e2d73d03858a3bda249b4ddcf8a4d458e0e2413931b18d9f7a"} Dec 02 16:20:21 crc kubenswrapper[4900]: I1202 16:20:21.879232 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 16:20:22 crc kubenswrapper[4900]: I1202 16:20:22.913480 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:20:22 crc kubenswrapper[4900]: E1202 16:20:22.913974 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:20:23 crc kubenswrapper[4900]: I1202 16:20:23.905233 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh769" event={"ID":"9da755df-d9b7-408b-bb58-c88cd747e466","Type":"ContainerStarted","Data":"e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998"} Dec 02 16:20:24 crc kubenswrapper[4900]: I1202 16:20:24.547726 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Dec 02 16:20:24 crc kubenswrapper[4900]: I1202 16:20:24.548335 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="fde68498-38c9-4808-ae8e-91dde7a6d2c9" containerName="adoption" containerID="cri-o://e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe" gracePeriod=30 Dec 02 16:20:24 crc kubenswrapper[4900]: I1202 16:20:24.918248 4900 generic.go:334] "Generic (PLEG): container finished" podID="9da755df-d9b7-408b-bb58-c88cd747e466" containerID="e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998" exitCode=0 Dec 02 16:20:24 crc kubenswrapper[4900]: I1202 16:20:24.929240 4900 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh769" event={"ID":"9da755df-d9b7-408b-bb58-c88cd747e466","Type":"ContainerDied","Data":"e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998"} Dec 02 16:20:25 crc kubenswrapper[4900]: I1202 16:20:25.928365 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh769" event={"ID":"9da755df-d9b7-408b-bb58-c88cd747e466","Type":"ContainerStarted","Data":"671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4"} Dec 02 16:20:25 crc kubenswrapper[4900]: I1202 16:20:25.950382 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vh769" podStartSLOduration=3.221350672 podStartE2EDuration="6.950363481s" podCreationTimestamp="2025-12-02 16:20:19 +0000 UTC" firstStartedPulling="2025-12-02 16:20:21.878913196 +0000 UTC m=+9467.294727067" lastFinishedPulling="2025-12-02 16:20:25.607926015 +0000 UTC m=+9471.023739876" observedRunningTime="2025-12-02 16:20:25.948331543 +0000 UTC m=+9471.364145394" watchObservedRunningTime="2025-12-02 16:20:25.950363481 +0000 UTC m=+9471.366177332" Dec 02 16:20:30 crc kubenswrapper[4900]: I1202 16:20:30.582032 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:30 crc kubenswrapper[4900]: I1202 16:20:30.582729 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:30 crc kubenswrapper[4900]: I1202 16:20:30.628900 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:31 crc kubenswrapper[4900]: I1202 16:20:31.037293 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:31 crc kubenswrapper[4900]: I1202 16:20:31.091224 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vh769"] Dec 02 16:20:32 crc kubenswrapper[4900]: I1202 16:20:32.999288 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vh769" podUID="9da755df-d9b7-408b-bb58-c88cd747e466" containerName="registry-server" containerID="cri-o://671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4" gracePeriod=2 Dec 02 16:20:33 crc kubenswrapper[4900]: I1202 16:20:33.564278 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:33 crc kubenswrapper[4900]: I1202 16:20:33.759036 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6gdl\" (UniqueName: \"kubernetes.io/projected/9da755df-d9b7-408b-bb58-c88cd747e466-kube-api-access-k6gdl\") pod \"9da755df-d9b7-408b-bb58-c88cd747e466\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " Dec 02 16:20:33 crc kubenswrapper[4900]: I1202 16:20:33.759108 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-utilities\") pod \"9da755df-d9b7-408b-bb58-c88cd747e466\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " Dec 02 16:20:33 crc kubenswrapper[4900]: I1202 16:20:33.759329 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-catalog-content\") pod \"9da755df-d9b7-408b-bb58-c88cd747e466\" (UID: \"9da755df-d9b7-408b-bb58-c88cd747e466\") " Dec 02 16:20:33 crc kubenswrapper[4900]: I1202 16:20:33.770772 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-utilities" (OuterVolumeSpecName: "utilities") pod "9da755df-d9b7-408b-bb58-c88cd747e466" (UID: "9da755df-d9b7-408b-bb58-c88cd747e466"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:20:33 crc kubenswrapper[4900]: I1202 16:20:33.898392 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.013444 4900 generic.go:334] "Generic (PLEG): container finished" podID="9da755df-d9b7-408b-bb58-c88cd747e466" containerID="671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4" exitCode=0 Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.013502 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh769" event={"ID":"9da755df-d9b7-408b-bb58-c88cd747e466","Type":"ContainerDied","Data":"671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4"} Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.013540 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vh769" event={"ID":"9da755df-d9b7-408b-bb58-c88cd747e466","Type":"ContainerDied","Data":"5a49c44dc61546e2d73d03858a3bda249b4ddcf8a4d458e0e2413931b18d9f7a"} Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.013570 4900 scope.go:117] "RemoveContainer" containerID="671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.013887 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vh769" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.041619 4900 scope.go:117] "RemoveContainer" containerID="e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.791567 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da755df-d9b7-408b-bb58-c88cd747e466-kube-api-access-k6gdl" (OuterVolumeSpecName: "kube-api-access-k6gdl") pod "9da755df-d9b7-408b-bb58-c88cd747e466" (UID: "9da755df-d9b7-408b-bb58-c88cd747e466"). InnerVolumeSpecName "kube-api-access-k6gdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.814777 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:20:34 crc kubenswrapper[4900]: E1202 16:20:34.815319 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.833246 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6gdl\" (UniqueName: \"kubernetes.io/projected/9da755df-d9b7-408b-bb58-c88cd747e466-kube-api-access-k6gdl\") on node \"crc\" DevicePath \"\"" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.848398 4900 scope.go:117] "RemoveContainer" containerID="3dc300e7f1d2c88c47e540f724030e18ff82eded6df79c893d00910bf677fd98" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.872061 4900 scope.go:117] "RemoveContainer" containerID="671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4" Dec 02 16:20:34 crc kubenswrapper[4900]: E1202 16:20:34.872441 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4\": container with ID starting with 671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4 not found: ID does not exist" containerID="671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.872477 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4"} err="failed to get container status \"671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4\": rpc error: code = NotFound desc = could not find container \"671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4\": container with ID starting with 671e299881afbd182b6923cd936f4a74fecf9a8e9e71d5c91a642112e48bdef4 not found: ID does not exist" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.872497 4900 scope.go:117] "RemoveContainer" containerID="e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998" Dec 02 16:20:34 crc kubenswrapper[4900]: E1202 16:20:34.872729 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998\": container with ID 
starting with e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998 not found: ID does not exist" containerID="e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.872752 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998"} err="failed to get container status \"e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998\": rpc error: code = NotFound desc = could not find container \"e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998\": container with ID starting with e51a17fbdd685441a49c1cc19957bd95acb0079f3277220d4b4358eb0b36e998 not found: ID does not exist" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.872766 4900 scope.go:117] "RemoveContainer" containerID="3dc300e7f1d2c88c47e540f724030e18ff82eded6df79c893d00910bf677fd98" Dec 02 16:20:34 crc kubenswrapper[4900]: E1202 16:20:34.872948 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dc300e7f1d2c88c47e540f724030e18ff82eded6df79c893d00910bf677fd98\": container with ID starting with 3dc300e7f1d2c88c47e540f724030e18ff82eded6df79c893d00910bf677fd98 not found: ID does not exist" containerID="3dc300e7f1d2c88c47e540f724030e18ff82eded6df79c893d00910bf677fd98" Dec 02 16:20:34 crc kubenswrapper[4900]: I1202 16:20:34.872970 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dc300e7f1d2c88c47e540f724030e18ff82eded6df79c893d00910bf677fd98"} err="failed to get container status \"3dc300e7f1d2c88c47e540f724030e18ff82eded6df79c893d00910bf677fd98\": rpc error: code = NotFound desc = could not find container \"3dc300e7f1d2c88c47e540f724030e18ff82eded6df79c893d00910bf677fd98\": container with ID starting with 3dc300e7f1d2c88c47e540f724030e18ff82eded6df79c893d00910bf677fd98 not found: ID does not exist" Dec 02 16:20:35 crc kubenswrapper[4900]: I1202 16:20:35.100593 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9da755df-d9b7-408b-bb58-c88cd747e466" (UID: "9da755df-d9b7-408b-bb58-c88cd747e466"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:20:35 crc kubenswrapper[4900]: I1202 16:20:35.146257 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da755df-d9b7-408b-bb58-c88cd747e466-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 16:20:35 crc kubenswrapper[4900]: I1202 16:20:35.272511 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vh769"] Dec 02 16:20:35 crc kubenswrapper[4900]: I1202 16:20:35.286016 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vh769"] Dec 02 16:20:36 crc kubenswrapper[4900]: I1202 16:20:36.931316 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da755df-d9b7-408b-bb58-c88cd747e466" path="/var/lib/kubelet/pods/9da755df-d9b7-408b-bb58-c88cd747e466/volumes" Dec 02 16:20:49 crc kubenswrapper[4900]: I1202 16:20:49.910788 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:20:50 crc kubenswrapper[4900]: I1202 16:20:50.212334 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"b35923b84528e8920ff24f880be81390411327963f57031dc0498139c3371f37"} Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.055923 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.211892 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\") pod \"fde68498-38c9-4808-ae8e-91dde7a6d2c9\" (UID: \"fde68498-38c9-4808-ae8e-91dde7a6d2c9\") " Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.212227 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgj9r\" (UniqueName: \"kubernetes.io/projected/fde68498-38c9-4808-ae8e-91dde7a6d2c9-kube-api-access-fgj9r\") pod \"fde68498-38c9-4808-ae8e-91dde7a6d2c9\" (UID: \"fde68498-38c9-4808-ae8e-91dde7a6d2c9\") " Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.217919 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde68498-38c9-4808-ae8e-91dde7a6d2c9-kube-api-access-fgj9r" (OuterVolumeSpecName: "kube-api-access-fgj9r") pod "fde68498-38c9-4808-ae8e-91dde7a6d2c9" (UID: "fde68498-38c9-4808-ae8e-91dde7a6d2c9"). InnerVolumeSpecName "kube-api-access-fgj9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.234541 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2" (OuterVolumeSpecName: "mariadb-data") pod "fde68498-38c9-4808-ae8e-91dde7a6d2c9" (UID: "fde68498-38c9-4808-ae8e-91dde7a6d2c9"). InnerVolumeSpecName "pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.272306 4900 generic.go:334] "Generic (PLEG): container finished" podID="fde68498-38c9-4808-ae8e-91dde7a6d2c9" containerID="e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe" exitCode=137 Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.272367 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fde68498-38c9-4808-ae8e-91dde7a6d2c9","Type":"ContainerDied","Data":"e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe"} Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.272405 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fde68498-38c9-4808-ae8e-91dde7a6d2c9","Type":"ContainerDied","Data":"fe58b1aaced4f9d225c6ba628b17352494262d2a5fee85d5450308521241d4ae"} Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.272457 4900 scope.go:117] "RemoveContainer" containerID="e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe" Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.272691 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.317030 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgj9r\" (UniqueName: \"kubernetes.io/projected/fde68498-38c9-4808-ae8e-91dde7a6d2c9-kube-api-access-fgj9r\") on node \"crc\" DevicePath \"\"" Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.317314 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\") on node \"crc\" " Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.382534 4900 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.383319 4900 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2") on node "crc" Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.389878 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.397143 4900 scope.go:117] "RemoveContainer" containerID="e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe" Dec 02 16:20:55 crc kubenswrapper[4900]: E1202 16:20:55.397745 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe\": container with ID starting with e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe not found: ID does not exist" containerID="e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe" Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.397813 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe"} err="failed to get container status \"e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe\": rpc error: code = NotFound desc = could not find container \"e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe\": container with ID starting with e6fbcf58eb7442eaab95afabb95f1c2b56efb3f0dcc48d97b15fab1c80881ebe not found: ID does not exist" Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.398836 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Dec 02 16:20:55 crc kubenswrapper[4900]: I1202 16:20:55.419157 4900 reconciler_common.go:293] "Volume detached for volume \"pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aeedf39-ad80-4de2-baa8-7ed0a29ce5a2\") on node \"crc\" DevicePath \"\"" Dec 02 16:20:56 crc kubenswrapper[4900]: I1202 16:20:56.202850 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Dec 02 16:20:56 crc kubenswrapper[4900]: I1202 16:20:56.203323 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="52a18c77-17d0-4f5d-b98d-b5c9947d0ed8" containerName="adoption" containerID="cri-o://81f30ad27085d16c67c11e1814f8c7d6a68da7aff2b6ad120cf108a8b4157342" gracePeriod=30 Dec 02 16:20:56 crc kubenswrapper[4900]: I1202 16:20:56.932191 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde68498-38c9-4808-ae8e-91dde7a6d2c9" path="/var/lib/kubelet/pods/fde68498-38c9-4808-ae8e-91dde7a6d2c9/volumes" Dec 02 16:21:26 crc kubenswrapper[4900]: I1202 16:21:26.695893 4900 generic.go:334] "Generic (PLEG): container finished" podID="52a18c77-17d0-4f5d-b98d-b5c9947d0ed8" containerID="81f30ad27085d16c67c11e1814f8c7d6a68da7aff2b6ad120cf108a8b4157342" exitCode=137 Dec 02 16:21:26 crc kubenswrapper[4900]: I1202 16:21:26.696376 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8","Type":"ContainerDied","Data":"81f30ad27085d16c67c11e1814f8c7d6a68da7aff2b6ad120cf108a8b4157342"} Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.416274 4900 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/ovn-copy-data" Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.576067 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jltxd\" (UniqueName: \"kubernetes.io/projected/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-kube-api-access-jltxd\") pod \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.576257 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-ovn-data-cert\") pod \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.577063 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-354967a5-67ca-400e-8d2a-8cae430124e8\") pod \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\" (UID: \"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8\") " Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.583238 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-kube-api-access-jltxd" (OuterVolumeSpecName: "kube-api-access-jltxd") pod "52a18c77-17d0-4f5d-b98d-b5c9947d0ed8" (UID: "52a18c77-17d0-4f5d-b98d-b5c9947d0ed8"). InnerVolumeSpecName "kube-api-access-jltxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.590755 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "52a18c77-17d0-4f5d-b98d-b5c9947d0ed8" (UID: "52a18c77-17d0-4f5d-b98d-b5c9947d0ed8"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.612894 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-354967a5-67ca-400e-8d2a-8cae430124e8" (OuterVolumeSpecName: "ovn-data") pod "52a18c77-17d0-4f5d-b98d-b5c9947d0ed8" (UID: "52a18c77-17d0-4f5d-b98d-b5c9947d0ed8"). InnerVolumeSpecName "pvc-354967a5-67ca-400e-8d2a-8cae430124e8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.681209 4900 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.681360 4900 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-354967a5-67ca-400e-8d2a-8cae430124e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-354967a5-67ca-400e-8d2a-8cae430124e8\") on node \"crc\" " Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.681413 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jltxd\" (UniqueName: \"kubernetes.io/projected/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8-kube-api-access-jltxd\") on node \"crc\" DevicePath \"\"" Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.707955 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"52a18c77-17d0-4f5d-b98d-b5c9947d0ed8","Type":"ContainerDied","Data":"57cafdfd771f75697c82eb71ea72d1c5cfbe10da313477909be89533bd1ab685"} Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.708006 4900 scope.go:117] "RemoveContainer" containerID="81f30ad27085d16c67c11e1814f8c7d6a68da7aff2b6ad120cf108a8b4157342" Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.708108 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.713504 4900 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.713696 4900 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-354967a5-67ca-400e-8d2a-8cae430124e8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-354967a5-67ca-400e-8d2a-8cae430124e8") on node "crc" Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.744133 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.753240 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Dec 02 16:21:27 crc kubenswrapper[4900]: I1202 16:21:27.782729 4900 reconciler_common.go:293] "Volume detached for volume \"pvc-354967a5-67ca-400e-8d2a-8cae430124e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-354967a5-67ca-400e-8d2a-8cae430124e8\") on node \"crc\" DevicePath \"\"" Dec 02 16:21:28 crc kubenswrapper[4900]: I1202 16:21:28.921951 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52a18c77-17d0-4f5d-b98d-b5c9947d0ed8" path="/var/lib/kubelet/pods/52a18c77-17d0-4f5d-b98d-b5c9947d0ed8/volumes" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.107091 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hgg9b/must-gather-n49lx"] Dec 02 16:22:35 crc kubenswrapper[4900]: E1202 16:22:35.107977 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da755df-d9b7-408b-bb58-c88cd747e466" containerName="registry-server" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.107989 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da755df-d9b7-408b-bb58-c88cd747e466" containerName="registry-server" Dec 02 16:22:35 crc kubenswrapper[4900]: E1202 16:22:35.108006 4900 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a18c77-17d0-4f5d-b98d-b5c9947d0ed8" containerName="adoption" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.108011 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a18c77-17d0-4f5d-b98d-b5c9947d0ed8" containerName="adoption" Dec 02 16:22:35 crc kubenswrapper[4900]: E1202 16:22:35.108024 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da755df-d9b7-408b-bb58-c88cd747e466" containerName="extract-utilities" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.108033 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da755df-d9b7-408b-bb58-c88cd747e466" containerName="extract-utilities" Dec 02 16:22:35 crc kubenswrapper[4900]: E1202 16:22:35.108048 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da755df-d9b7-408b-bb58-c88cd747e466" containerName="extract-content" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.108054 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da755df-d9b7-408b-bb58-c88cd747e466" containerName="extract-content" Dec 02 16:22:35 crc kubenswrapper[4900]: E1202 16:22:35.108078 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde68498-38c9-4808-ae8e-91dde7a6d2c9" containerName="adoption" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.108084 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde68498-38c9-4808-ae8e-91dde7a6d2c9" containerName="adoption" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.108271 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da755df-d9b7-408b-bb58-c88cd747e466" containerName="registry-server" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.108281 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde68498-38c9-4808-ae8e-91dde7a6d2c9" containerName="adoption" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.108297 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="52a18c77-17d0-4f5d-b98d-b5c9947d0ed8" containerName="adoption" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.109374 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hgg9b/must-gather-n49lx" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.111365 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hgg9b"/"default-dockercfg-5hvcq" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.111683 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hgg9b"/"kube-root-ca.crt" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.111683 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hgg9b"/"openshift-service-ca.crt" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.119271 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hgg9b/must-gather-n49lx"] Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.226392 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cea334bc-bbda-43e6-999e-1d1827b0b880-must-gather-output\") pod \"must-gather-n49lx\" (UID: \"cea334bc-bbda-43e6-999e-1d1827b0b880\") " pod="openshift-must-gather-hgg9b/must-gather-n49lx" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.226595 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj4lr\" (UniqueName: \"kubernetes.io/projected/cea334bc-bbda-43e6-999e-1d1827b0b880-kube-api-access-jj4lr\") pod \"must-gather-n49lx\" (UID: \"cea334bc-bbda-43e6-999e-1d1827b0b880\") " pod="openshift-must-gather-hgg9b/must-gather-n49lx" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.329476 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cea334bc-bbda-43e6-999e-1d1827b0b880-must-gather-output\") pod \"must-gather-n49lx\" (UID: \"cea334bc-bbda-43e6-999e-1d1827b0b880\") " pod="openshift-must-gather-hgg9b/must-gather-n49lx" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.329577 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj4lr\" (UniqueName: \"kubernetes.io/projected/cea334bc-bbda-43e6-999e-1d1827b0b880-kube-api-access-jj4lr\") pod \"must-gather-n49lx\" (UID: \"cea334bc-bbda-43e6-999e-1d1827b0b880\") " pod="openshift-must-gather-hgg9b/must-gather-n49lx" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.330033 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cea334bc-bbda-43e6-999e-1d1827b0b880-must-gather-output\") pod \"must-gather-n49lx\" (UID: \"cea334bc-bbda-43e6-999e-1d1827b0b880\") " pod="openshift-must-gather-hgg9b/must-gather-n49lx" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.350556 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj4lr\" (UniqueName: \"kubernetes.io/projected/cea334bc-bbda-43e6-999e-1d1827b0b880-kube-api-access-jj4lr\") pod \"must-gather-n49lx\" (UID: \"cea334bc-bbda-43e6-999e-1d1827b0b880\") " pod="openshift-must-gather-hgg9b/must-gather-n49lx" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.432325 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hgg9b/must-gather-n49lx" Dec 02 16:22:35 crc kubenswrapper[4900]: I1202 16:22:35.835999 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hgg9b/must-gather-n49lx"] Dec 02 16:22:36 crc kubenswrapper[4900]: I1202 16:22:36.585824 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hgg9b/must-gather-n49lx" event={"ID":"cea334bc-bbda-43e6-999e-1d1827b0b880","Type":"ContainerStarted","Data":"fcc220ad1d70ee770a01ff7cfb3eaec295b0ef224df93ee78a4033376867b105"} Dec 02 16:22:41 crc kubenswrapper[4900]: I1202 16:22:41.647136 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hgg9b/must-gather-n49lx" event={"ID":"cea334bc-bbda-43e6-999e-1d1827b0b880","Type":"ContainerStarted","Data":"7bf57165c387cb78f920d2a37cc70a32b14e035c1640134e400fa6512e1ff6fa"} Dec 02 16:22:41 crc kubenswrapper[4900]: I1202 16:22:41.647748 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hgg9b/must-gather-n49lx" event={"ID":"cea334bc-bbda-43e6-999e-1d1827b0b880","Type":"ContainerStarted","Data":"8c67ea529759be5701a5d647ea95dada365492676b30b5c893e7537bfb6614c9"} Dec 02 16:22:45 crc kubenswrapper[4900]: I1202 16:22:45.902530 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hgg9b/must-gather-n49lx" podStartSLOduration=6.331773352 podStartE2EDuration="10.90250264s" podCreationTimestamp="2025-12-02 16:22:35 +0000 UTC" firstStartedPulling="2025-12-02 16:22:35.83183202 +0000 UTC m=+9601.247645871" lastFinishedPulling="2025-12-02 16:22:40.402561308 +0000 UTC m=+9605.818375159" observedRunningTime="2025-12-02 16:22:41.672960672 +0000 UTC m=+9607.088774523" watchObservedRunningTime="2025-12-02 16:22:45.90250264 +0000 UTC m=+9611.318316531" Dec 02 16:22:45 crc kubenswrapper[4900]: I1202 16:22:45.908022 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hgg9b/crc-debug-v6rr4"] Dec 02 16:22:45 crc kubenswrapper[4900]: I1202 16:22:45.910344 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" Dec 02 16:22:46 crc kubenswrapper[4900]: I1202 16:22:46.022022 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsc9f\" (UniqueName: \"kubernetes.io/projected/69c751ab-ed46-49f9-9140-38c89c3e4203-kube-api-access-lsc9f\") pod \"crc-debug-v6rr4\" (UID: \"69c751ab-ed46-49f9-9140-38c89c3e4203\") " pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" Dec 02 16:22:46 crc kubenswrapper[4900]: I1202 16:22:46.022207 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69c751ab-ed46-49f9-9140-38c89c3e4203-host\") pod \"crc-debug-v6rr4\" (UID: \"69c751ab-ed46-49f9-9140-38c89c3e4203\") " pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" Dec 02 16:22:46 crc kubenswrapper[4900]: I1202 16:22:46.124047 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsc9f\" (UniqueName: \"kubernetes.io/projected/69c751ab-ed46-49f9-9140-38c89c3e4203-kube-api-access-lsc9f\") pod \"crc-debug-v6rr4\" (UID: \"69c751ab-ed46-49f9-9140-38c89c3e4203\") " pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" Dec 02 16:22:46 crc kubenswrapper[4900]: I1202 16:22:46.124237 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69c751ab-ed46-49f9-9140-38c89c3e4203-host\") pod \"crc-debug-v6rr4\" (UID: \"69c751ab-ed46-49f9-9140-38c89c3e4203\") " pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" Dec 02 16:22:46 crc kubenswrapper[4900]: I1202 16:22:46.124449 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69c751ab-ed46-49f9-9140-38c89c3e4203-host\") pod \"crc-debug-v6rr4\" (UID: \"69c751ab-ed46-49f9-9140-38c89c3e4203\") " pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" Dec 02 16:22:46 crc kubenswrapper[4900]: I1202 16:22:46.151968 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsc9f\" (UniqueName: \"kubernetes.io/projected/69c751ab-ed46-49f9-9140-38c89c3e4203-kube-api-access-lsc9f\") pod \"crc-debug-v6rr4\" (UID: \"69c751ab-ed46-49f9-9140-38c89c3e4203\") " pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" Dec 02 16:22:46 crc kubenswrapper[4900]: I1202 16:22:46.238118 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" Dec 02 16:22:46 crc kubenswrapper[4900]: I1202 16:22:46.702035 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" event={"ID":"69c751ab-ed46-49f9-9140-38c89c3e4203","Type":"ContainerStarted","Data":"fbf78aeded77ff507ab6c3e2c12fcecc63bb0742260d2a2f188942c866ac8d4d"} Dec 02 16:22:58 crc kubenswrapper[4900]: I1202 16:22:58.866406 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" event={"ID":"69c751ab-ed46-49f9-9140-38c89c3e4203","Type":"ContainerStarted","Data":"a44540f66e3b3959eb0358560f542f32e363a52a086d6765ab58bf74e2540fea"} Dec 02 16:23:15 crc kubenswrapper[4900]: I1202 16:23:15.116777 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:23:15 crc kubenswrapper[4900]: I1202 16:23:15.117320 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:23:18 crc kubenswrapper[4900]: I1202 16:23:18.056111 4900 generic.go:334] "Generic (PLEG): container finished" podID="69c751ab-ed46-49f9-9140-38c89c3e4203" containerID="a44540f66e3b3959eb0358560f542f32e363a52a086d6765ab58bf74e2540fea" exitCode=0 Dec 02 16:23:18 crc kubenswrapper[4900]: I1202 16:23:18.056809 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" event={"ID":"69c751ab-ed46-49f9-9140-38c89c3e4203","Type":"ContainerDied","Data":"a44540f66e3b3959eb0358560f542f32e363a52a086d6765ab58bf74e2540fea"} Dec 02 16:23:19 crc kubenswrapper[4900]: I1202 16:23:19.188875 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" Dec 02 16:23:19 crc kubenswrapper[4900]: I1202 16:23:19.234791 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hgg9b/crc-debug-v6rr4"] Dec 02 16:23:19 crc kubenswrapper[4900]: I1202 16:23:19.243178 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hgg9b/crc-debug-v6rr4"] Dec 02 16:23:19 crc kubenswrapper[4900]: I1202 16:23:19.278961 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69c751ab-ed46-49f9-9140-38c89c3e4203-host\") pod \"69c751ab-ed46-49f9-9140-38c89c3e4203\" (UID: \"69c751ab-ed46-49f9-9140-38c89c3e4203\") " Dec 02 16:23:19 crc kubenswrapper[4900]: I1202 16:23:19.279281 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsc9f\" (UniqueName: \"kubernetes.io/projected/69c751ab-ed46-49f9-9140-38c89c3e4203-kube-api-access-lsc9f\") pod \"69c751ab-ed46-49f9-9140-38c89c3e4203\" (UID: \"69c751ab-ed46-49f9-9140-38c89c3e4203\") " Dec 02 16:23:19 crc kubenswrapper[4900]: I1202 16:23:19.279096 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69c751ab-ed46-49f9-9140-38c89c3e4203-host" (OuterVolumeSpecName: "host") pod "69c751ab-ed46-49f9-9140-38c89c3e4203" (UID: "69c751ab-ed46-49f9-9140-38c89c3e4203"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 16:23:19 crc kubenswrapper[4900]: I1202 16:23:19.280347 4900 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69c751ab-ed46-49f9-9140-38c89c3e4203-host\") on node \"crc\" DevicePath \"\"" Dec 02 16:23:19 crc kubenswrapper[4900]: I1202 16:23:19.293994 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c751ab-ed46-49f9-9140-38c89c3e4203-kube-api-access-lsc9f" (OuterVolumeSpecName: "kube-api-access-lsc9f") pod "69c751ab-ed46-49f9-9140-38c89c3e4203" (UID: "69c751ab-ed46-49f9-9140-38c89c3e4203"). InnerVolumeSpecName "kube-api-access-lsc9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:23:19 crc kubenswrapper[4900]: I1202 16:23:19.382454 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsc9f\" (UniqueName: \"kubernetes.io/projected/69c751ab-ed46-49f9-9140-38c89c3e4203-kube-api-access-lsc9f\") on node \"crc\" DevicePath \"\"" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.082438 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf78aeded77ff507ab6c3e2c12fcecc63bb0742260d2a2f188942c866ac8d4d" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.082476 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hgg9b/crc-debug-v6rr4" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.436666 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hgg9b/crc-debug-2p8nl"] Dec 02 16:23:20 crc kubenswrapper[4900]: E1202 16:23:20.437385 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c751ab-ed46-49f9-9140-38c89c3e4203" containerName="container-00" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.437403 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c751ab-ed46-49f9-9140-38c89c3e4203" containerName="container-00" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.437603 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c751ab-ed46-49f9-9140-38c89c3e4203" containerName="container-00" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.438402 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.619154 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f486ed7-9eee-442b-b802-62f89be2a53a-host\") pod \"crc-debug-2p8nl\" (UID: \"6f486ed7-9eee-442b-b802-62f89be2a53a\") " pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.619483 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9v4\" (UniqueName: \"kubernetes.io/projected/6f486ed7-9eee-442b-b802-62f89be2a53a-kube-api-access-6f9v4\") pod \"crc-debug-2p8nl\" (UID: \"6f486ed7-9eee-442b-b802-62f89be2a53a\") " pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.721918 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f486ed7-9eee-442b-b802-62f89be2a53a-host\") pod \"crc-debug-2p8nl\" (UID: \"6f486ed7-9eee-442b-b802-62f89be2a53a\") " pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.722105 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f486ed7-9eee-442b-b802-62f89be2a53a-host\") pod \"crc-debug-2p8nl\" (UID: \"6f486ed7-9eee-442b-b802-62f89be2a53a\") " pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.722265 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9v4\" (UniqueName: \"kubernetes.io/projected/6f486ed7-9eee-442b-b802-62f89be2a53a-kube-api-access-6f9v4\") pod \"crc-debug-2p8nl\" (UID: \"6f486ed7-9eee-442b-b802-62f89be2a53a\") " pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.749379 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f9v4\" (UniqueName: \"kubernetes.io/projected/6f486ed7-9eee-442b-b802-62f89be2a53a-kube-api-access-6f9v4\") pod \"crc-debug-2p8nl\" (UID: \"6f486ed7-9eee-442b-b802-62f89be2a53a\") " pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.759746 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" Dec 02 16:23:20 crc kubenswrapper[4900]: I1202 16:23:20.934462 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c751ab-ed46-49f9-9140-38c89c3e4203" path="/var/lib/kubelet/pods/69c751ab-ed46-49f9-9140-38c89c3e4203/volumes" Dec 02 16:23:21 crc kubenswrapper[4900]: I1202 16:23:21.094994 4900 generic.go:334] "Generic (PLEG): container finished" podID="6f486ed7-9eee-442b-b802-62f89be2a53a" containerID="9c54527a8cfeb17c052fff27a84a5e1a1e45c724231c2843f5a87934939c8f3e" exitCode=1 Dec 02 16:23:21 crc kubenswrapper[4900]: I1202 16:23:21.095036 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" event={"ID":"6f486ed7-9eee-442b-b802-62f89be2a53a","Type":"ContainerDied","Data":"9c54527a8cfeb17c052fff27a84a5e1a1e45c724231c2843f5a87934939c8f3e"} Dec 02 16:23:21 crc kubenswrapper[4900]: I1202 16:23:21.095066 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" event={"ID":"6f486ed7-9eee-442b-b802-62f89be2a53a","Type":"ContainerStarted","Data":"fd0207e37cfdce6410bfbd01a6988c2a0bbc3ab4c56d1bf0919d84b4485e827a"} Dec 02 16:23:21 crc kubenswrapper[4900]: I1202 16:23:21.136988 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hgg9b/crc-debug-2p8nl"] Dec 02 16:23:21 crc kubenswrapper[4900]: I1202 16:23:21.151622 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hgg9b/crc-debug-2p8nl"] Dec 02 16:23:22 crc kubenswrapper[4900]: I1202 16:23:22.206343 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" Dec 02 16:23:22 crc kubenswrapper[4900]: I1202 16:23:22.281633 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f486ed7-9eee-442b-b802-62f89be2a53a-host\") pod \"6f486ed7-9eee-442b-b802-62f89be2a53a\" (UID: \"6f486ed7-9eee-442b-b802-62f89be2a53a\") " Dec 02 16:23:22 crc kubenswrapper[4900]: I1202 16:23:22.281694 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f9v4\" (UniqueName: \"kubernetes.io/projected/6f486ed7-9eee-442b-b802-62f89be2a53a-kube-api-access-6f9v4\") pod \"6f486ed7-9eee-442b-b802-62f89be2a53a\" (UID: \"6f486ed7-9eee-442b-b802-62f89be2a53a\") " Dec 02 16:23:22 crc kubenswrapper[4900]: I1202 16:23:22.281725 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f486ed7-9eee-442b-b802-62f89be2a53a-host" (OuterVolumeSpecName: "host") pod "6f486ed7-9eee-442b-b802-62f89be2a53a" (UID: "6f486ed7-9eee-442b-b802-62f89be2a53a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 16:23:22 crc kubenswrapper[4900]: I1202 16:23:22.282369 4900 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f486ed7-9eee-442b-b802-62f89be2a53a-host\") on node \"crc\" DevicePath \"\"" Dec 02 16:23:22 crc kubenswrapper[4900]: I1202 16:23:22.288943 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f486ed7-9eee-442b-b802-62f89be2a53a-kube-api-access-6f9v4" (OuterVolumeSpecName: "kube-api-access-6f9v4") pod "6f486ed7-9eee-442b-b802-62f89be2a53a" (UID: "6f486ed7-9eee-442b-b802-62f89be2a53a"). InnerVolumeSpecName "kube-api-access-6f9v4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:23:22 crc kubenswrapper[4900]: I1202 16:23:22.384203 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f9v4\" (UniqueName: \"kubernetes.io/projected/6f486ed7-9eee-442b-b802-62f89be2a53a-kube-api-access-6f9v4\") on node \"crc\" DevicePath \"\"" Dec 02 16:23:22 crc kubenswrapper[4900]: I1202 16:23:22.923462 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f486ed7-9eee-442b-b802-62f89be2a53a" path="/var/lib/kubelet/pods/6f486ed7-9eee-442b-b802-62f89be2a53a/volumes" Dec 02 16:23:23 crc kubenswrapper[4900]: I1202 16:23:23.113691 4900 scope.go:117] "RemoveContainer" containerID="9c54527a8cfeb17c052fff27a84a5e1a1e45c724231c2843f5a87934939c8f3e" Dec 02 16:23:23 crc kubenswrapper[4900]: I1202 16:23:23.113714 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hgg9b/crc-debug-2p8nl" Dec 02 16:23:45 crc kubenswrapper[4900]: I1202 16:23:45.116737 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:23:45 crc kubenswrapper[4900]: I1202 16:23:45.118611 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:24:15 crc kubenswrapper[4900]: I1202 16:24:15.116915 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:24:15 crc kubenswrapper[4900]: I1202 16:24:15.117747 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 16:24:15 crc kubenswrapper[4900]: I1202 16:24:15.117804 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" Dec 02 16:24:15 crc kubenswrapper[4900]: I1202 16:24:15.118839 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b35923b84528e8920ff24f880be81390411327963f57031dc0498139c3371f37"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 16:24:15 crc kubenswrapper[4900]: I1202 16:24:15.118904 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://b35923b84528e8920ff24f880be81390411327963f57031dc0498139c3371f37" gracePeriod=600 Dec 02 16:24:15 crc kubenswrapper[4900]: I1202 16:24:15.777470 
4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="b35923b84528e8920ff24f880be81390411327963f57031dc0498139c3371f37" exitCode=0 Dec 02 16:24:15 crc kubenswrapper[4900]: I1202 16:24:15.777559 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"b35923b84528e8920ff24f880be81390411327963f57031dc0498139c3371f37"} Dec 02 16:24:15 crc kubenswrapper[4900]: I1202 16:24:15.778124 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3"} Dec 02 16:24:15 crc kubenswrapper[4900]: I1202 16:24:15.778148 4900 scope.go:117] "RemoveContainer" containerID="6a0b87d254f9a690976fc186f3e7204c0bc247983ab1c541987c0ded8452d44f" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.044305 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fq9bj"] Dec 02 16:26:02 crc kubenswrapper[4900]: E1202 16:26:02.045326 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f486ed7-9eee-442b-b802-62f89be2a53a" containerName="container-00" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.045341 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f486ed7-9eee-442b-b802-62f89be2a53a" containerName="container-00" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.045658 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f486ed7-9eee-442b-b802-62f89be2a53a" containerName="container-00" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.047358 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fq9bj" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.074944 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fq9bj"] Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.172775 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-utilities\") pod \"redhat-operators-fq9bj\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") " pod="openshift-marketplace/redhat-operators-fq9bj" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.172979 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-catalog-content\") pod \"redhat-operators-fq9bj\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") " pod="openshift-marketplace/redhat-operators-fq9bj" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.173008 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4lg\" (UniqueName: \"kubernetes.io/projected/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-kube-api-access-6x4lg\") pod \"redhat-operators-fq9bj\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") " pod="openshift-marketplace/redhat-operators-fq9bj" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.274706 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-catalog-content\") pod \"redhat-operators-fq9bj\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") " pod="openshift-marketplace/redhat-operators-fq9bj" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.274764 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x4lg\" (UniqueName: \"kubernetes.io/projected/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-kube-api-access-6x4lg\") pod \"redhat-operators-fq9bj\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") " pod="openshift-marketplace/redhat-operators-fq9bj" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.274835 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-utilities\") pod \"redhat-operators-fq9bj\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") " pod="openshift-marketplace/redhat-operators-fq9bj" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.275323 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-utilities\") pod \"redhat-operators-fq9bj\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") " pod="openshift-marketplace/redhat-operators-fq9bj" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.275562 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-catalog-content\") pod \"redhat-operators-fq9bj\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") " pod="openshift-marketplace/redhat-operators-fq9bj" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.761538 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6x4lg\" (UniqueName: \"kubernetes.io/projected/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-kube-api-access-6x4lg\") pod \"redhat-operators-fq9bj\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") " pod="openshift-marketplace/redhat-operators-fq9bj" Dec 02 16:26:02 crc kubenswrapper[4900]: I1202 16:26:02.992580 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fq9bj" Dec 02 16:26:03 crc kubenswrapper[4900]: I1202 16:26:03.550357 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fq9bj"] Dec 02 16:26:04 crc kubenswrapper[4900]: I1202 16:26:04.191934 4900 generic.go:334] "Generic (PLEG): container finished" podID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerID="1d8eee84841ef83dacec70953d3cd7dcd6ab65018307352b07475e6633afd9ab" exitCode=0 Dec 02 16:26:04 crc kubenswrapper[4900]: I1202 16:26:04.192257 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq9bj" event={"ID":"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f","Type":"ContainerDied","Data":"1d8eee84841ef83dacec70953d3cd7dcd6ab65018307352b07475e6633afd9ab"} Dec 02 16:26:04 crc kubenswrapper[4900]: I1202 16:26:04.192286 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq9bj" event={"ID":"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f","Type":"ContainerStarted","Data":"a96d8bf653814f23317180b5385dcbe0f320dae9c0555a02c262be1f002da69b"} Dec 02 16:26:04 crc kubenswrapper[4900]: I1202 16:26:04.196547 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 16:26:06 crc kubenswrapper[4900]: I1202 16:26:06.225745 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq9bj" event={"ID":"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f","Type":"ContainerStarted","Data":"ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4"} Dec 02 16:26:09 crc kubenswrapper[4900]: I1202 16:26:09.284388 4900 generic.go:334] "Generic (PLEG): container finished" podID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerID="ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4" exitCode=0 Dec 02 16:26:09 crc kubenswrapper[4900]: I1202 16:26:09.284468 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq9bj" event={"ID":"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f","Type":"ContainerDied","Data":"ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4"} Dec 02 16:26:10 crc kubenswrapper[4900]: I1202 16:26:10.297041 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq9bj" event={"ID":"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f","Type":"ContainerStarted","Data":"d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717"} Dec 02 16:26:10 crc kubenswrapper[4900]: I1202 16:26:10.326829 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fq9bj" podStartSLOduration=2.818506079 podStartE2EDuration="8.326803689s" podCreationTimestamp="2025-12-02 16:26:02 +0000 UTC" firstStartedPulling="2025-12-02 16:26:04.196128468 +0000 UTC m=+9809.611942349" lastFinishedPulling="2025-12-02 16:26:09.704426108 +0000 UTC m=+9815.120239959" observedRunningTime="2025-12-02 16:26:10.319751528 +0000 UTC m=+9815.735565389" watchObservedRunningTime="2025-12-02 16:26:10.326803689 +0000 UTC m=+9815.742617560" Dec 02 16:26:12 crc 
Dec 02 16:26:12 crc kubenswrapper[4900]: I1202 16:26:12.993423 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fq9bj"
Dec 02 16:26:14 crc kubenswrapper[4900]: I1202 16:26:14.038725 4900 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fq9bj" podUID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerName="registry-server" probeResult="failure" output=<
Dec 02 16:26:14 crc kubenswrapper[4900]: timeout: failed to connect service ":50051" within 1s
Dec 02 16:26:14 crc kubenswrapper[4900]: >
Dec 02 16:26:15 crc kubenswrapper[4900]: I1202 16:26:15.116409 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 16:26:15 crc kubenswrapper[4900]: I1202 16:26:15.116939 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 16:26:23 crc kubenswrapper[4900]: I1202 16:26:23.084767 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fq9bj"
Dec 02 16:26:23 crc kubenswrapper[4900]: I1202 16:26:23.174416 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fq9bj"
Dec 02 16:26:23 crc kubenswrapper[4900]: I1202 16:26:23.351614 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fq9bj"]
Dec 02 16:26:24 crc kubenswrapper[4900]: I1202 16:26:24.528224 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fq9bj" podUID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerName="registry-server" containerID="cri-o://d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717" gracePeriod=2
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.146731 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fq9bj"
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.198533 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-utilities\") pod \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") "
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.198611 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x4lg\" (UniqueName: \"kubernetes.io/projected/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-kube-api-access-6x4lg\") pod \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") "
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.198712 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-catalog-content\") pod \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\" (UID: \"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f\") "
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.199876 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-utilities" (OuterVolumeSpecName: "utilities") pod "cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" (UID: "cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.206164 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-kube-api-access-6x4lg" (OuterVolumeSpecName: "kube-api-access-6x4lg") pod "cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" (UID: "cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f"). InnerVolumeSpecName "kube-api-access-6x4lg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.302021 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.302258 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x4lg\" (UniqueName: \"kubernetes.io/projected/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-kube-api-access-6x4lg\") on node \"crc\" DevicePath \"\""
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.333421 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" (UID: "cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.404517 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.541690 4900 generic.go:334] "Generic (PLEG): container finished" podID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerID="d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717" exitCode=0
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.541741 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq9bj" event={"ID":"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f","Type":"ContainerDied","Data":"d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717"}
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.541980 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq9bj" event={"ID":"cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f","Type":"ContainerDied","Data":"a96d8bf653814f23317180b5385dcbe0f320dae9c0555a02c262be1f002da69b"}
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.542003 4900 scope.go:117] "RemoveContainer" containerID="d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717"
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.541814 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fq9bj"
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.563217 4900 scope.go:117] "RemoveContainer" containerID="ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4"
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.587475 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fq9bj"]
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.588070 4900 scope.go:117] "RemoveContainer" containerID="1d8eee84841ef83dacec70953d3cd7dcd6ab65018307352b07475e6633afd9ab"
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.596715 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fq9bj"]
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.629236 4900 scope.go:117] "RemoveContainer" containerID="d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717"
Dec 02 16:26:25 crc kubenswrapper[4900]: E1202 16:26:25.629596 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717\": container with ID starting with d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717 not found: ID does not exist" containerID="d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717"
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.629667 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717"} err="failed to get container status \"d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717\": rpc error: code = NotFound desc = could not find container \"d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717\": container with ID starting with d212b761e1cc3e0b715b012ebdfe0d9c1a6cf945af7a8c24d07a662532567717 not found: ID does not exist"
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.629701 4900 scope.go:117] "RemoveContainer" containerID="ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4"
Dec 02 16:26:25 crc kubenswrapper[4900]: E1202 16:26:25.629951 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4\": container with ID starting with ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4 not found: ID does not exist" containerID="ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4"
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.629979 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4"} err="failed to get container status \"ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4\": rpc error: code = NotFound desc = could not find container \"ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4\": container with ID starting with ef95f5afc68e9a3f95cd47def8538acafdf0aacd5e82a39df70f75df401d77b4 not found: ID does not exist"
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.629997 4900 scope.go:117] "RemoveContainer" containerID="1d8eee84841ef83dacec70953d3cd7dcd6ab65018307352b07475e6633afd9ab"
Dec 02 16:26:25 crc kubenswrapper[4900]: E1202 16:26:25.630262 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d8eee84841ef83dacec70953d3cd7dcd6ab65018307352b07475e6633afd9ab\": container with ID starting with 1d8eee84841ef83dacec70953d3cd7dcd6ab65018307352b07475e6633afd9ab not found: ID does not exist" containerID="1d8eee84841ef83dacec70953d3cd7dcd6ab65018307352b07475e6633afd9ab"
Dec 02 16:26:25 crc kubenswrapper[4900]: I1202 16:26:25.630294 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d8eee84841ef83dacec70953d3cd7dcd6ab65018307352b07475e6633afd9ab"} err="failed to get container status \"1d8eee84841ef83dacec70953d3cd7dcd6ab65018307352b07475e6633afd9ab\": rpc error: code = NotFound desc = could not find container \"1d8eee84841ef83dacec70953d3cd7dcd6ab65018307352b07475e6633afd9ab\": container with ID starting with 1d8eee84841ef83dacec70953d3cd7dcd6ab65018307352b07475e6633afd9ab not found: ID does not exist"
Dec 02 16:26:26 crc kubenswrapper[4900]: I1202 16:26:26.930513 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" path="/var/lib/kubelet/pods/cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f/volumes"
Dec 02 16:26:45 crc kubenswrapper[4900]: I1202 16:26:45.117051 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 16:26:45 crc kubenswrapper[4900]: I1202 16:26:45.117503 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 16:27:09 crc kubenswrapper[4900]: I1202 16:27:09.341128 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_c939e201-5539-4e52-a39a-758328ae3f19/init-config-reloader/0.log"
file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_c939e201-5539-4e52-a39a-758328ae3f19/init-config-reloader/0.log" Dec 02 16:27:09 crc kubenswrapper[4900]: I1202 16:27:09.547343 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_c939e201-5539-4e52-a39a-758328ae3f19/init-config-reloader/0.log" Dec 02 16:27:09 crc kubenswrapper[4900]: I1202 16:27:09.563015 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_c939e201-5539-4e52-a39a-758328ae3f19/alertmanager/0.log" Dec 02 16:27:09 crc kubenswrapper[4900]: I1202 16:27:09.641166 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_c939e201-5539-4e52-a39a-758328ae3f19/config-reloader/0.log" Dec 02 16:27:09 crc kubenswrapper[4900]: I1202 16:27:09.748071 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9822e03f-8de7-4b82-8703-d7b868859bc1/aodh-api/0.log" Dec 02 16:27:09 crc kubenswrapper[4900]: I1202 16:27:09.834400 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9822e03f-8de7-4b82-8703-d7b868859bc1/aodh-evaluator/0.log" Dec 02 16:27:09 crc kubenswrapper[4900]: I1202 16:27:09.882702 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9822e03f-8de7-4b82-8703-d7b868859bc1/aodh-listener/0.log" Dec 02 16:27:09 crc kubenswrapper[4900]: I1202 16:27:09.921130 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9822e03f-8de7-4b82-8703-d7b868859bc1/aodh-notifier/0.log" Dec 02 16:27:10 crc kubenswrapper[4900]: I1202 16:27:10.066011 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6bcc48fd58-tt2tj_a2a936b2-ff27-4972-8238-f4cc6b2e1b63/barbican-api/0.log" Dec 02 16:27:10 crc kubenswrapper[4900]: I1202 16:27:10.089852 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6bcc48fd58-tt2tj_a2a936b2-ff27-4972-8238-f4cc6b2e1b63/barbican-api-log/0.log" Dec 02 16:27:10 crc kubenswrapper[4900]: I1202 16:27:10.250918 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cb9f45f4b-8bgkl_288c6474-d114-42d9-8030-43ca582fd106/barbican-keystone-listener/0.log" Dec 02 16:27:10 crc kubenswrapper[4900]: I1202 16:27:10.290961 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cb9f45f4b-8bgkl_288c6474-d114-42d9-8030-43ca582fd106/barbican-keystone-listener-log/0.log" Dec 02 16:27:10 crc kubenswrapper[4900]: I1202 16:27:10.444854 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-78cd9cf4b9-48dtr_4d8522c3-bc85-4af9-aab5-8e610f1af1e0/barbican-worker/0.log" Dec 02 16:27:10 crc kubenswrapper[4900]: I1202 16:27:10.484031 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-78cd9cf4b9-48dtr_4d8522c3-bc85-4af9-aab5-8e610f1af1e0/barbican-worker-log/0.log" Dec 02 16:27:10 crc kubenswrapper[4900]: I1202 16:27:10.602125 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-59jqm_c354e156-0a05-4523-85a4-ce5d110c449a/bootstrap-openstack-openstack-cell1/0.log" Dec 02 16:27:10 crc kubenswrapper[4900]: I1202 16:27:10.691624 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe/ceilometer-central-agent/0.log" Dec 02 16:27:10 crc 
kubenswrapper[4900]: I1202 16:27:10.821032 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe/proxy-httpd/0.log" Dec 02 16:27:10 crc kubenswrapper[4900]: I1202 16:27:10.856396 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe/ceilometer-notification-agent/0.log" Dec 02 16:27:10 crc kubenswrapper[4900]: I1202 16:27:10.932176 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c24c627b-1ab7-4b1e-a1ec-bf182e9ef7fe/sg-core/0.log" Dec 02 16:27:11 crc kubenswrapper[4900]: I1202 16:27:11.069162 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-qvzjb_291b7d12-e918-405a-83a3-5c3fa5733f83/ceph-client-openstack-openstack-cell1/0.log" Dec 02 16:27:11 crc kubenswrapper[4900]: I1202 16:27:11.210978 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b99d250f-3bdb-4c35-af5d-3ff9d38bebde/cinder-api-log/0.log" Dec 02 16:27:11 crc kubenswrapper[4900]: I1202 16:27:11.259796 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b99d250f-3bdb-4c35-af5d-3ff9d38bebde/cinder-api/0.log" Dec 02 16:27:11 crc kubenswrapper[4900]: I1202 16:27:11.473825 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_b2e331a7-7edb-4984-a486-00ff5463ca20/cinder-backup/0.log" Dec 02 16:27:11 crc kubenswrapper[4900]: I1202 16:27:11.693495 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_b2e331a7-7edb-4984-a486-00ff5463ca20/probe/0.log" Dec 02 16:27:11 crc kubenswrapper[4900]: I1202 16:27:11.748275 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ee4e6e1c-8df1-4cec-be59-c6f7cb764f15/cinder-scheduler/0.log" Dec 02 16:27:11 crc kubenswrapper[4900]: I1202 16:27:11.793045 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ee4e6e1c-8df1-4cec-be59-c6f7cb764f15/probe/0.log" Dec 02 16:27:12 crc kubenswrapper[4900]: I1202 16:27:12.010843 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_b8adcc13-3199-4c22-b50e-cb975a62c107/probe/0.log" Dec 02 16:27:12 crc kubenswrapper[4900]: I1202 16:27:12.049364 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_b8adcc13-3199-4c22-b50e-cb975a62c107/cinder-volume/0.log" Dec 02 16:27:12 crc kubenswrapper[4900]: I1202 16:27:12.155365 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-kfvbg_2cce9cbb-e921-4a36-9a5c-dd9d077c33a5/configure-network-openstack-openstack-cell1/0.log" Dec 02 16:27:12 crc kubenswrapper[4900]: I1202 16:27:12.275349 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-lzpj4_655dba29-218e-4558-b61e-127bab45af83/configure-os-openstack-openstack-cell1/0.log" Dec 02 16:27:12 crc kubenswrapper[4900]: I1202 16:27:12.395322 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7799f47d95-mhbvd_6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4/init/0.log" Dec 02 16:27:12 crc kubenswrapper[4900]: I1202 16:27:12.522830 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7799f47d95-mhbvd_6bcdaf0b-8b4d-4ed7-a539-9f8526601ff4/init/0.log" Dec 02 16:27:12 crc 
Dec 02 16:27:12 crc kubenswrapper[4900]: I1202 16:27:12.623583 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-7bk9z_85f28781-7762-4415-86d3-4c0b9bc08e2e/download-cache-openstack-openstack-cell1/0.log"
Dec 02 16:27:12 crc kubenswrapper[4900]: I1202 16:27:12.727041 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_067e9541-7243-4e60-b233-1d180118c325/glance-httpd/0.log"
Dec 02 16:27:12 crc kubenswrapper[4900]: I1202 16:27:12.770924 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_067e9541-7243-4e60-b233-1d180118c325/glance-log/0.log"
Dec 02 16:27:12 crc kubenswrapper[4900]: I1202 16:27:12.825241 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c587812a-dffe-46fc-8407-a102214416f7/glance-httpd/0.log"
Dec 02 16:27:12 crc kubenswrapper[4900]: I1202 16:27:12.840055 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c587812a-dffe-46fc-8407-a102214416f7/glance-log/0.log"
Dec 02 16:27:13 crc kubenswrapper[4900]: I1202 16:27:13.067881 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7db49f5d4c-rjsh2_1e5907eb-6866-41f5-81c2-a15c5d1b7379/heat-api/0.log"
Dec 02 16:27:13 crc kubenswrapper[4900]: I1202 16:27:13.138857 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5cc74477-qtk82_b7b3fd2e-0a49-42c3-a44e-cb79074ab660/heat-cfnapi/0.log"
Dec 02 16:27:13 crc kubenswrapper[4900]: I1202 16:27:13.189457 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5fcb767956-nzh5b_8eca1564-298d-433d-b601-43980f0dcf0a/heat-engine/0.log"
Dec 02 16:27:13 crc kubenswrapper[4900]: I1202 16:27:13.393153 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b86d75b6f-gp8ml_1bf781a9-0950-4d20-8ee9-9b4fa0305657/horizon-log/0.log"
Dec 02 16:27:13 crc kubenswrapper[4900]: I1202 16:27:13.406376 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b86d75b6f-gp8ml_1bf781a9-0950-4d20-8ee9-9b4fa0305657/horizon/0.log"
Dec 02 16:27:13 crc kubenswrapper[4900]: I1202 16:27:13.434465 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-fj8fw_b6f1eb22-77a6-4faf-a13f-a7e9d060360c/install-certs-openstack-openstack-cell1/0.log"
Dec 02 16:27:13 crc kubenswrapper[4900]: I1202 16:27:13.598654 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-xc8qk_9da867a9-fdf9-413f-81d6-787700e0d41b/install-os-openstack-openstack-cell1/0.log"
Dec 02 16:27:13 crc kubenswrapper[4900]: I1202 16:27:13.851923 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7944bbc6f9-jf4lg_f1416913-b691-42de-b4ff-b6266c6436d3/keystone-api/0.log"
Dec 02 16:27:13 crc kubenswrapper[4900]: I1202 16:27:13.856301 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29411521-64bz4_defcc634-cea1-4350-9e5f-f714f766b6c8/keystone-cron/0.log"
Dec 02 16:27:13 crc kubenswrapper[4900]: I1202 16:27:13.874148 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f608b559-0bd1-439d-b404-022ecd8de49f/kube-state-metrics/0.log"
Dec 02 16:27:14 crc kubenswrapper[4900]: I1202 16:27:14.050398 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-gljkp_1e92ec1a-600c-4a89-b3ff-fa3ab471c0a5/libvirt-openstack-openstack-cell1/0.log"
Dec 02 16:27:14 crc kubenswrapper[4900]: I1202 16:27:14.220169 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_3a834bd3-12e9-44c7-8d70-24b29fd29ab1/manila-api/0.log"
Dec 02 16:27:14 crc kubenswrapper[4900]: I1202 16:27:14.233540 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_3a834bd3-12e9-44c7-8d70-24b29fd29ab1/manila-api-log/0.log"
Dec 02 16:27:14 crc kubenswrapper[4900]: I1202 16:27:14.356592 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_56f52fb7-926a-4e18-b6d8-1455db37189a/manila-scheduler/0.log"
Dec 02 16:27:14 crc kubenswrapper[4900]: I1202 16:27:14.415721 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_56f52fb7-926a-4e18-b6d8-1455db37189a/probe/0.log"
Dec 02 16:27:14 crc kubenswrapper[4900]: I1202 16:27:14.518331 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_e755696b-ec67-463d-9d13-acb00739dec6/manila-share/0.log"
Dec 02 16:27:14 crc kubenswrapper[4900]: I1202 16:27:14.596912 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_e755696b-ec67-463d-9d13-acb00739dec6/probe/0.log"
Dec 02 16:27:14 crc kubenswrapper[4900]: I1202 16:27:14.953176 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64d9497c5c-zxhv8_7c81f731-76e5-4d22-ba31-c6fcaf3f699c/neutron-httpd/0.log"
Dec 02 16:27:14 crc kubenswrapper[4900]: I1202 16:27:14.972227 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64d9497c5c-zxhv8_7c81f731-76e5-4d22-ba31-c6fcaf3f699c/neutron-api/0.log"
Dec 02 16:27:15 crc kubenswrapper[4900]: I1202 16:27:15.117539 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-vhckw_22259223-6c4d-4df8-be51-5c59f0675b67/neutron-dhcp-openstack-openstack-cell1/0.log"
Dec 02 16:27:15 crc kubenswrapper[4900]: I1202 16:27:15.120361 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 16:27:15 crc kubenswrapper[4900]: I1202 16:27:15.120416 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 16:27:15 crc kubenswrapper[4900]: I1202 16:27:15.120457 4900 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq"
Dec 02 16:27:15 crc kubenswrapper[4900]: I1202 16:27:15.121189 4900 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
containerStatusID={"Type":"cri-o","ID":"fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3"} pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 16:27:15 crc kubenswrapper[4900]: I1202 16:27:15.121235 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" containerID="cri-o://fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" gracePeriod=600 Dec 02 16:27:15 crc kubenswrapper[4900]: E1202 16:27:15.243916 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:27:15 crc kubenswrapper[4900]: I1202 16:27:15.333978 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-7djgq_3b0426b9-6b77-438a-a6a6-b71951425f1d/neutron-metadata-openstack-openstack-cell1/0.log" Dec 02 16:27:15 crc kubenswrapper[4900]: I1202 16:27:15.503484 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-28rxz_a0737da3-829f-4802-95aa-f42ae8546b75/neutron-sriov-openstack-openstack-cell1/0.log" Dec 02 16:27:15 crc kubenswrapper[4900]: I1202 16:27:15.702421 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_823a463c-6984-4621-ba0f-b9ad8b7f618c/nova-api-api/0.log" Dec 02 16:27:15 crc kubenswrapper[4900]: I1202 16:27:15.735024 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_823a463c-6984-4621-ba0f-b9ad8b7f618c/nova-api-log/0.log" Dec 02 16:27:16 crc kubenswrapper[4900]: I1202 16:27:16.037586 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f76fdf10-cab6-4b9d-8484-82ed6b113f4f/nova-cell0-conductor-conductor/0.log" Dec 02 16:27:16 crc kubenswrapper[4900]: I1202 16:27:16.180490 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_54127e6f-a1f3-4821-a5c3-2400546b2463/nova-cell1-conductor-conductor/0.log" Dec 02 16:27:16 crc kubenswrapper[4900]: I1202 16:27:16.188871 4900 generic.go:334] "Generic (PLEG): container finished" podID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" exitCode=0 Dec 02 16:27:16 crc kubenswrapper[4900]: I1202 16:27:16.188915 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerDied","Data":"fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3"} Dec 02 16:27:16 crc kubenswrapper[4900]: I1202 16:27:16.188949 4900 scope.go:117] "RemoveContainer" containerID="b35923b84528e8920ff24f880be81390411327963f57031dc0498139c3371f37" Dec 02 16:27:16 crc kubenswrapper[4900]: I1202 16:27:16.191662 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:27:16 crc 
kubenswrapper[4900]: E1202 16:27:16.193672 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:27:16 crc kubenswrapper[4900]: I1202 16:27:16.285503 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_12e426fe-b93c-46cf-979b-69fc1eff7684/nova-cell1-novncproxy-novncproxy/0.log" Dec 02 16:27:17 crc kubenswrapper[4900]: I1202 16:27:17.000061 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-prqhc_c26b8eb5-c400-47f5-ae09-765101884ea4/nova-cell1-openstack-openstack-cell1/0.log" Dec 02 16:27:17 crc kubenswrapper[4900]: I1202 16:27:17.006081 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellrtgns_a40ced8a-8021-4c6e-8381-4e587bdb7f04/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Dec 02 16:27:17 crc kubenswrapper[4900]: I1202 16:27:17.309146 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_26c207c8-a3ae-4e45-bcdc-b3840b851d6d/nova-metadata-metadata/0.log" Dec 02 16:27:17 crc kubenswrapper[4900]: I1202 16:27:17.337106 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_26c207c8-a3ae-4e45-bcdc-b3840b851d6d/nova-metadata-log/0.log" Dec 02 16:27:17 crc kubenswrapper[4900]: I1202 16:27:17.509969 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-757f5bc974-vgzhx_71c00568-6d73-4684-ba1e-010757ff1e63/init/0.log" Dec 02 16:27:17 crc kubenswrapper[4900]: I1202 16:27:17.558690 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_83b06487-b1e6-4657-a114-280a21544af9/nova-scheduler-scheduler/0.log" Dec 02 16:27:17 crc kubenswrapper[4900]: I1202 16:27:17.711248 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-757f5bc974-vgzhx_71c00568-6d73-4684-ba1e-010757ff1e63/init/0.log" Dec 02 16:27:17 crc kubenswrapper[4900]: I1202 16:27:17.789659 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-757f5bc974-vgzhx_71c00568-6d73-4684-ba1e-010757ff1e63/octavia-api-provider-agent/0.log" Dec 02 16:27:17 crc kubenswrapper[4900]: I1202 16:27:17.934564 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-jdx4z_53ade2fa-2048-4a3b-9035-c981bb812173/init/0.log" Dec 02 16:27:17 crc kubenswrapper[4900]: I1202 16:27:17.959021 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-757f5bc974-vgzhx_71c00568-6d73-4684-ba1e-010757ff1e63/octavia-api/0.log" Dec 02 16:27:18 crc kubenswrapper[4900]: I1202 16:27:18.768399 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-jdx4z_53ade2fa-2048-4a3b-9035-c981bb812173/init/0.log" Dec 02 16:27:18 crc kubenswrapper[4900]: I1202 16:27:18.827856 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-5bzxq_820aa17f-6436-4bc1-a178-acdd1488fb13/init/0.log" Dec 02 16:27:18 crc kubenswrapper[4900]: I1202 16:27:18.903832 4900 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-jdx4z_53ade2fa-2048-4a3b-9035-c981bb812173/octavia-healthmanager/0.log" Dec 02 16:27:18 crc kubenswrapper[4900]: I1202 16:27:18.987185 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-5bzxq_820aa17f-6436-4bc1-a178-acdd1488fb13/init/0.log" Dec 02 16:27:19 crc kubenswrapper[4900]: I1202 16:27:19.023156 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-5bzxq_820aa17f-6436-4bc1-a178-acdd1488fb13/octavia-housekeeping/0.log" Dec 02 16:27:19 crc kubenswrapper[4900]: I1202 16:27:19.117861 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-svfh6_93db835a-7a0f-4e36-ab43-5696fa15fb07/init/0.log" Dec 02 16:27:19 crc kubenswrapper[4900]: I1202 16:27:19.322059 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-svfh6_93db835a-7a0f-4e36-ab43-5696fa15fb07/init/0.log" Dec 02 16:27:19 crc kubenswrapper[4900]: I1202 16:27:19.337189 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-svfh6_93db835a-7a0f-4e36-ab43-5696fa15fb07/octavia-amphora-httpd/0.log" Dec 02 16:27:19 crc kubenswrapper[4900]: I1202 16:27:19.348989 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-pbq7n_d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348/init/0.log" Dec 02 16:27:19 crc kubenswrapper[4900]: I1202 16:27:19.625416 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-pbq7n_d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348/init/0.log" Dec 02 16:27:19 crc kubenswrapper[4900]: I1202 16:27:19.645944 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-pbq7n_d9ce32f3-b59c-4cb2-9b5a-4a5336e9f348/octavia-rsyslog/0.log" Dec 02 16:27:19 crc kubenswrapper[4900]: I1202 16:27:19.938867 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-bkczn_a53b311e-3f6b-48aa-b306-72f3e26c4ce9/init/0.log" Dec 02 16:27:20 crc kubenswrapper[4900]: I1202 16:27:20.160433 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d82f787a-45fd-4580-bf19-12e6995493c1/mysql-bootstrap/0.log" Dec 02 16:27:20 crc kubenswrapper[4900]: I1202 16:27:20.161350 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-bkczn_a53b311e-3f6b-48aa-b306-72f3e26c4ce9/init/0.log" Dec 02 16:27:20 crc kubenswrapper[4900]: I1202 16:27:20.224678 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-bkczn_a53b311e-3f6b-48aa-b306-72f3e26c4ce9/octavia-worker/0.log" Dec 02 16:27:20 crc kubenswrapper[4900]: I1202 16:27:20.422337 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d82f787a-45fd-4580-bf19-12e6995493c1/galera/0.log" Dec 02 16:27:20 crc kubenswrapper[4900]: I1202 16:27:20.466757 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d82f787a-45fd-4580-bf19-12e6995493c1/mysql-bootstrap/0.log" Dec 02 16:27:20 crc kubenswrapper[4900]: I1202 16:27:20.494359 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_96bd872d-1991-4668-b075-19f9673dccd4/mysql-bootstrap/0.log" Dec 02 16:27:20 crc kubenswrapper[4900]: I1202 16:27:20.699720 4900 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_05b43015-c93a-4b6b-9997-49dc52a8d84c/openstackclient/0.log" Dec 02 16:27:20 crc kubenswrapper[4900]: I1202 16:27:20.728164 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_96bd872d-1991-4668-b075-19f9673dccd4/mysql-bootstrap/0.log" Dec 02 16:27:20 crc kubenswrapper[4900]: I1202 16:27:20.755491 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_96bd872d-1991-4668-b075-19f9673dccd4/galera/0.log" Dec 02 16:27:20 crc kubenswrapper[4900]: I1202 16:27:20.968940 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-glzwh_07f3aa4d-40c4-45df-a374-1e2e908f7e3b/ovn-controller/0.log" Dec 02 16:27:20 crc kubenswrapper[4900]: I1202 16:27:20.971337 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8zt6j_f7ebaa7d-56ce-4de1-954e-c478bb64871c/openstack-network-exporter/0.log" Dec 02 16:27:21 crc kubenswrapper[4900]: I1202 16:27:21.204695 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2vsfp_40b95209-a40e-45db-bc19-a3ae870ce6ce/ovsdb-server-init/0.log" Dec 02 16:27:21 crc kubenswrapper[4900]: I1202 16:27:21.372611 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2vsfp_40b95209-a40e-45db-bc19-a3ae870ce6ce/ovsdb-server-init/0.log" Dec 02 16:27:21 crc kubenswrapper[4900]: I1202 16:27:21.373746 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2vsfp_40b95209-a40e-45db-bc19-a3ae870ce6ce/ovs-vswitchd/0.log" Dec 02 16:27:21 crc kubenswrapper[4900]: I1202 16:27:21.387516 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2vsfp_40b95209-a40e-45db-bc19-a3ae870ce6ce/ovsdb-server/0.log" Dec 02 16:27:21 crc kubenswrapper[4900]: I1202 16:27:21.578730 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09/openstack-network-exporter/0.log" Dec 02 16:27:21 crc kubenswrapper[4900]: I1202 16:27:21.610674 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0789e4f3-cc8f-4a96-b7a7-8adb1e4ffd09/ovn-northd/0.log" Dec 02 16:27:21 crc kubenswrapper[4900]: I1202 16:27:21.772398 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-twxzg_118b34e8-dda9-4e1c-9323-3e64eb19ac6d/ovn-openstack-openstack-cell1/0.log" Dec 02 16:27:21 crc kubenswrapper[4900]: I1202 16:27:21.812107 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4633b2f4-6b48-4454-87b9-cde7d972b4c4/openstack-network-exporter/0.log" Dec 02 16:27:21 crc kubenswrapper[4900]: I1202 16:27:21.857043 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4633b2f4-6b48-4454-87b9-cde7d972b4c4/ovsdbserver-nb/0.log" Dec 02 16:27:22 crc kubenswrapper[4900]: I1202 16:27:22.046098 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_22d5f722-205d-4bbc-867e-d730e746aaed/openstack-network-exporter/0.log" Dec 02 16:27:22 crc kubenswrapper[4900]: I1202 16:27:22.057501 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_22d5f722-205d-4bbc-867e-d730e746aaed/ovsdbserver-nb/0.log" Dec 02 16:27:22 crc kubenswrapper[4900]: I1202 16:27:22.220427 4900 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-2_48b6a8cf-d942-4104-96e8-357ecf994aa2/openstack-network-exporter/0.log" Dec 02 16:27:22 crc kubenswrapper[4900]: I1202 16:27:22.269048 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_48b6a8cf-d942-4104-96e8-357ecf994aa2/ovsdbserver-nb/0.log" Dec 02 16:27:22 crc kubenswrapper[4900]: I1202 16:27:22.436084 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6dde891a-1ff3-45cb-b977-7e3103cc4795/ovsdbserver-sb/0.log" Dec 02 16:27:22 crc kubenswrapper[4900]: I1202 16:27:22.488497 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6dde891a-1ff3-45cb-b977-7e3103cc4795/openstack-network-exporter/0.log" Dec 02 16:27:22 crc kubenswrapper[4900]: I1202 16:27:22.611164 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_377fd1c0-f510-425f-b335-d38287d30c30/openstack-network-exporter/0.log" Dec 02 16:27:22 crc kubenswrapper[4900]: I1202 16:27:22.647287 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_377fd1c0-f510-425f-b335-d38287d30c30/ovsdbserver-sb/0.log" Dec 02 16:27:22 crc kubenswrapper[4900]: I1202 16:27:22.783099 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_d86e01a9-36e7-4b06-af54-fa63c1587435/openstack-network-exporter/0.log" Dec 02 16:27:22 crc kubenswrapper[4900]: I1202 16:27:22.843136 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_d86e01a9-36e7-4b06-af54-fa63c1587435/ovsdbserver-sb/0.log" Dec 02 16:27:23 crc kubenswrapper[4900]: I1202 16:27:23.047894 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6cdf87577b-9k5x9_425acb33-bee4-4ad1-8c19-301cda4281de/placement-api/0.log" Dec 02 16:27:23 crc kubenswrapper[4900]: I1202 16:27:23.109442 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6cdf87577b-9k5x9_425acb33-bee4-4ad1-8c19-301cda4281de/placement-log/0.log" Dec 02 16:27:23 crc kubenswrapper[4900]: I1202 16:27:23.173591 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cx9f9h_a6628b96-8eee-40d6-9219-3db27878b324/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Dec 02 16:27:23 crc kubenswrapper[4900]: I1202 16:27:23.378534 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_35b0b67a-fdd7-47a5-8ee1-4a179bffaa84/init-config-reloader/0.log" Dec 02 16:27:23 crc kubenswrapper[4900]: I1202 16:27:23.496222 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_35b0b67a-fdd7-47a5-8ee1-4a179bffaa84/config-reloader/0.log" Dec 02 16:27:23 crc kubenswrapper[4900]: I1202 16:27:23.499554 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_35b0b67a-fdd7-47a5-8ee1-4a179bffaa84/init-config-reloader/0.log" Dec 02 16:27:23 crc kubenswrapper[4900]: I1202 16:27:23.598382 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_35b0b67a-fdd7-47a5-8ee1-4a179bffaa84/thanos-sidecar/0.log" Dec 02 16:27:23 crc kubenswrapper[4900]: I1202 16:27:23.599899 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_35b0b67a-fdd7-47a5-8ee1-4a179bffaa84/prometheus/0.log" Dec 02 16:27:23 crc 
kubenswrapper[4900]: I1202 16:27:23.738559 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d8040323-8bc7-4ee9-bf46-c7f1499a653f/setup-container/0.log" Dec 02 16:27:23 crc kubenswrapper[4900]: I1202 16:27:23.943617 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d8040323-8bc7-4ee9-bf46-c7f1499a653f/setup-container/0.log" Dec 02 16:27:23 crc kubenswrapper[4900]: I1202 16:27:23.966940 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d8040323-8bc7-4ee9-bf46-c7f1499a653f/rabbitmq/0.log" Dec 02 16:27:24 crc kubenswrapper[4900]: I1202 16:27:24.046947 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d9af8210-7aca-4a64-96f3-17906daaef91/setup-container/0.log" Dec 02 16:27:24 crc kubenswrapper[4900]: I1202 16:27:24.165394 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d9af8210-7aca-4a64-96f3-17906daaef91/setup-container/0.log" Dec 02 16:27:24 crc kubenswrapper[4900]: I1202 16:27:24.307531 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-d6qss_eaac2118-0f22-4b6f-a877-b5cd1ff3d60b/reboot-os-openstack-openstack-cell1/0.log" Dec 02 16:27:24 crc kubenswrapper[4900]: I1202 16:27:24.450218 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d9af8210-7aca-4a64-96f3-17906daaef91/rabbitmq/0.log" Dec 02 16:27:24 crc kubenswrapper[4900]: I1202 16:27:24.478369 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-4fnz6_0f3377fb-c012-4e57-b165-5b9848f46ac1/run-os-openstack-openstack-cell1/0.log" Dec 02 16:27:24 crc kubenswrapper[4900]: I1202 16:27:24.672188 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b7187b9a-4969-4f77-b03e-51d588486060/memcached/0.log" Dec 02 16:27:24 crc kubenswrapper[4900]: I1202 16:27:24.678325 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-dxvcd_63024711-d35e-4023-82d6-66e310453c12/ssh-known-hosts-openstack/0.log" Dec 02 16:27:24 crc kubenswrapper[4900]: I1202 16:27:24.730052 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-njxk6_4014ba6f-1f86-42af-8f44-4ce633ea6288/telemetry-openstack-openstack-cell1/0.log" Dec 02 16:27:24 crc kubenswrapper[4900]: I1202 16:27:24.898576 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-vxnpg_8a8021fa-4038-4f47-ac57-f800a48e293a/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Dec 02 16:27:24 crc kubenswrapper[4900]: I1202 16:27:24.955707 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-hdjlm_103022c1-b49a-4eeb-80ea-464de2ca78dc/validate-network-openstack-openstack-cell1/0.log" Dec 02 16:27:27 crc kubenswrapper[4900]: I1202 16:27:27.911763 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:27:27 crc kubenswrapper[4900]: E1202 16:27:27.913971 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 02 16:27:38 crc kubenswrapper[4900]: I1202 16:27:38.910422 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3"
Dec 02 16:27:38 crc kubenswrapper[4900]: E1202 16:27:38.911160 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 16:27:48 crc kubenswrapper[4900]: I1202 16:27:48.354849 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7_1f6d0f83-6d8d-433f-a31a-2f204c4c8c18/util/0.log"
Dec 02 16:27:48 crc kubenswrapper[4900]: I1202 16:27:48.851064 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7_1f6d0f83-6d8d-433f-a31a-2f204c4c8c18/util/0.log"
Dec 02 16:27:48 crc kubenswrapper[4900]: I1202 16:27:48.872941 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7_1f6d0f83-6d8d-433f-a31a-2f204c4c8c18/pull/0.log"
Dec 02 16:27:48 crc kubenswrapper[4900]: I1202 16:27:48.912255 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7_1f6d0f83-6d8d-433f-a31a-2f204c4c8c18/pull/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.086930 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7_1f6d0f83-6d8d-433f-a31a-2f204c4c8c18/pull/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.113879 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7_1f6d0f83-6d8d-433f-a31a-2f204c4c8c18/util/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.152885 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd29fdm7_1f6d0f83-6d8d-433f-a31a-2f204c4c8c18/extract/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.285002 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cdb56_6c51856b-78db-4067-aec4-bdbb2513d6d3/kube-rbac-proxy/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.371317 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cdb56_6c51856b-78db-4067-aec4-bdbb2513d6d3/manager/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.372894 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55f4dbb9b7-snhnc_90f13638-1211-4cdc-9c96-298ae112e911/kube-rbac-proxy/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.564583 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55f4dbb9b7-snhnc_90f13638-1211-4cdc-9c96-298ae112e911/manager/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.573518 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-79n4c_60b341d0-be93-4332-8eb3-356d0a0b4ee4/kube-rbac-proxy/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.641629 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-79n4c_60b341d0-be93-4332-8eb3-356d0a0b4ee4/manager/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.796323 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-z9xzn_6ad04c0c-8b90-4f63-8ff3-8afe8f1d2495/kube-rbac-proxy/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.938431 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-z9xzn_6ad04c0c-8b90-4f63-8ff3-8afe8f1d2495/manager/0.log"
Dec 02 16:27:49 crc kubenswrapper[4900]: I1202 16:27:49.968582 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-l8dqg_76472776-56db-440f-a0a5-5a45eaa83baa/kube-rbac-proxy/0.log"
Dec 02 16:27:50 crc kubenswrapper[4900]: I1202 16:27:50.045177 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-l8dqg_76472776-56db-440f-a0a5-5a45eaa83baa/manager/0.log"
Dec 02 16:27:50 crc kubenswrapper[4900]: I1202 16:27:50.169189 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-htb7b_c6afcb92-eb6f-4615-8b25-bcdc77eda80e/kube-rbac-proxy/0.log"
Dec 02 16:27:50 crc kubenswrapper[4900]: I1202 16:27:50.176148 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-htb7b_c6afcb92-eb6f-4615-8b25-bcdc77eda80e/manager/0.log"
Dec 02 16:27:50 crc kubenswrapper[4900]: I1202 16:27:50.780318 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-6479q_2648e9ae-b5db-4196-a921-5a708baae84d/kube-rbac-proxy/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.045627 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-77vc7_a79b1912-6054-4cc9-a584-c7e3e6ca9a31/kube-rbac-proxy/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.121345 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-77vc7_a79b1912-6054-4cc9-a584-c7e3e6ca9a31/manager/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.174352 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-6479q_2648e9ae-b5db-4196-a921-5a708baae84d/manager/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.354419 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-r8t7n_9127ec85-11f3-4526-bda2-884648292518/kube-rbac-proxy/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.417569 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-r8t7n_9127ec85-11f3-4526-bda2-884648292518/manager/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.495781 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-jkzbj_a6ef169a-4706-4704-bc8a-4afe5a1d4ac9/kube-rbac-proxy/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.652582 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-jkzbj_a6ef169a-4706-4704-bc8a-4afe5a1d4ac9/manager/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.700513 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-cwkch_98c32660-966d-43a1-932d-4ca2af418bf5/kube-rbac-proxy/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.744323 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-cwkch_98c32660-966d-43a1-932d-4ca2af418bf5/manager/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.897419 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ksgnh_b45635de-34ee-4361-b519-a12f95d3849b/kube-rbac-proxy/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.909797 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3"
Dec 02 16:27:51 crc kubenswrapper[4900]: E1202 16:27:51.910198 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.953288 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ksgnh_b45635de-34ee-4361-b519-a12f95d3849b/manager/0.log"
Dec 02 16:27:51 crc kubenswrapper[4900]: I1202 16:27:51.965827 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-7h9xf_6a464001-7dd2-4485-ba44-3c1dcd166c05/kube-rbac-proxy/0.log"
Dec 02 16:27:52 crc kubenswrapper[4900]: I1202 16:27:52.176437 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-vk8cv_f67eee76-7e8d-4b82-aa0a-b5a8600de493/manager/0.log"
Dec 02 16:27:52 crc kubenswrapper[4900]: I1202 16:27:52.180606 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-vk8cv_f67eee76-7e8d-4b82-aa0a-b5a8600de493/kube-rbac-proxy/0.log"
Dec 02 16:27:52 crc kubenswrapper[4900]: I1202 16:27:52.355791 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-7h9xf_6a464001-7dd2-4485-ba44-3c1dcd166c05/manager/0.log"
Dec 02 16:27:52 crc kubenswrapper[4900]: I1202 16:27:52.375016 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls_58fb2457-4246-4898-98d3-c33292975d8e/kube-rbac-proxy/0.log"
Dec 02 16:27:52 crc kubenswrapper[4900]: I1202 16:27:52.431847 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4mvvls_58fb2457-4246-4898-98d3-c33292975d8e/manager/0.log"
Dec 02 16:27:52 crc kubenswrapper[4900]: I1202 16:27:52.708108 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hz9dn_c9f91c8d-4840-4c3e-9a19-2e9e64a76f60/registry-server/0.log"
Dec 02 16:27:52 crc kubenswrapper[4900]: I1202 16:27:52.797098 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-84d58866d9-pxwtk_f5a97b31-1461-4708-a9ea-373711c869c3/operator/0.log"
Dec 02 16:27:52 crc kubenswrapper[4900]: I1202 16:27:52.864427 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-78lhr_7901b678-edf0-4df9-8896-c596d2eab813/kube-rbac-proxy/0.log"
Dec 02 16:27:53 crc kubenswrapper[4900]: I1202 16:27:53.033597 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-78lhr_7901b678-edf0-4df9-8896-c596d2eab813/manager/0.log"
Dec 02 16:27:53 crc kubenswrapper[4900]: I1202 16:27:53.124879 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-hh8s2_a4bbc01f-bf31-4d5b-ae5a-197bb92d1a18/kube-rbac-proxy/0.log"
Dec 02 16:27:53 crc kubenswrapper[4900]: I1202 16:27:53.134993 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-hh8s2_a4bbc01f-bf31-4d5b-ae5a-197bb92d1a18/manager/0.log"
Dec 02 16:27:53 crc kubenswrapper[4900]: I1202 16:27:53.357434 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-4klhn_e370b52c-c5be-4584-ada8-183e5d79e1f5/kube-rbac-proxy/0.log"
Dec 02 16:27:53 crc kubenswrapper[4900]: I1202 16:27:53.467295 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mdwwd_ceece6b3-6e91-4afc-9f75-604473b84a44/operator/0.log"
Dec 02 16:27:53 crc kubenswrapper[4900]: I1202 16:27:53.565586 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-4klhn_e370b52c-c5be-4584-ada8-183e5d79e1f5/manager/0.log"
Dec 02 16:27:53 crc kubenswrapper[4900]: I1202 16:27:53.621148 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-s8q5z_da380dce-d4c5-41ed-8273-648f6ad79d43/kube-rbac-proxy/0.log"
Dec 02 16:27:53 crc kubenswrapper[4900]: I1202 16:27:53.792758 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h6jdq_b2312412-3b86-40c1-9cf8-32d59d3a3a4e/kube-rbac-proxy/0.log"
Dec 02 16:27:53 crc kubenswrapper[4900]: I1202 16:27:53.857558 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-h6jdq_b2312412-3b86-40c1-9cf8-32d59d3a3a4e/manager/0.log"
Dec 02 16:27:53 crc kubenswrapper[4900]: I1202 16:27:53.893566 4900 log.go:25] "Finished
parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-s8q5z_da380dce-d4c5-41ed-8273-648f6ad79d43/manager/0.log" Dec 02 16:27:54 crc kubenswrapper[4900]: I1202 16:27:54.019131 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-2pspv_eab7da61-f654-4f78-8dfa-4ede5002df86/kube-rbac-proxy/0.log" Dec 02 16:27:54 crc kubenswrapper[4900]: I1202 16:27:54.086430 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-2pspv_eab7da61-f654-4f78-8dfa-4ede5002df86/manager/0.log" Dec 02 16:27:54 crc kubenswrapper[4900]: I1202 16:27:54.788889 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58cd586464-f64kd_bb6f8bf1-8305-460b-94d3-208e68ad6f52/manager/0.log" Dec 02 16:28:06 crc kubenswrapper[4900]: I1202 16:28:06.910779 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:28:06 crc kubenswrapper[4900]: E1202 16:28:06.911675 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:28:16 crc kubenswrapper[4900]: I1202 16:28:16.027593 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-n7qvb_c9bd8a2c-57b4-40b4-b931-16496b5236a0/control-plane-machine-set-operator/0.log" Dec 02 16:28:16 crc kubenswrapper[4900]: I1202 16:28:16.211694 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xcml8_aaa314e8-a902-4ab4-85ad-550d03c8a91d/kube-rbac-proxy/0.log" Dec 02 16:28:16 crc kubenswrapper[4900]: I1202 16:28:16.212097 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xcml8_aaa314e8-a902-4ab4-85ad-550d03c8a91d/machine-api-operator/0.log" Dec 02 16:28:20 crc kubenswrapper[4900]: I1202 16:28:20.910432 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:28:20 crc kubenswrapper[4900]: E1202 16:28:20.911582 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:28:30 crc kubenswrapper[4900]: I1202 16:28:30.725532 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-mv79b_0b8dc943-2813-4eb2-8c2e-741f60ba09df/cert-manager-controller/0.log" Dec 02 16:28:30 crc kubenswrapper[4900]: I1202 16:28:30.874117 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-grnhn_14f84abe-cdad-4668-acd1-860a7bc5a9d6/cert-manager-cainjector/0.log" Dec 02 16:28:31 
crc kubenswrapper[4900]: I1202 16:28:31.001089 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-h4b6p_acf8bd75-e23d-4fc9-99d9-0571bbe33ae3/cert-manager-webhook/0.log" Dec 02 16:28:34 crc kubenswrapper[4900]: I1202 16:28:34.922428 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:28:34 crc kubenswrapper[4900]: E1202 16:28:34.923299 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:28:46 crc kubenswrapper[4900]: I1202 16:28:46.910448 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:28:46 crc kubenswrapper[4900]: E1202 16:28:46.911398 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:28:47 crc kubenswrapper[4900]: I1202 16:28:47.482733 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-95z72_5578f939-25d2-48da-8999-c26293a16f46/nmstate-console-plugin/0.log" Dec 02 16:28:47 crc kubenswrapper[4900]: I1202 16:28:47.534275 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-cvpcz_150c147d-317b-48e4-a057-da44c031d144/nmstate-handler/0.log" Dec 02 16:28:47 crc kubenswrapper[4900]: I1202 16:28:47.707941 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-dw7xc_8382e72b-9452-45c7-92bd-dbdf8cca9706/nmstate-metrics/0.log" Dec 02 16:28:47 crc kubenswrapper[4900]: I1202 16:28:47.708572 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-dw7xc_8382e72b-9452-45c7-92bd-dbdf8cca9706/kube-rbac-proxy/0.log" Dec 02 16:28:47 crc kubenswrapper[4900]: I1202 16:28:47.859912 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-2jf4m_7c6f8aa4-bdd4-4050-9af6-a5e2bef44e66/nmstate-operator/0.log" Dec 02 16:28:47 crc kubenswrapper[4900]: I1202 16:28:47.900898 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-rgps6_4aad5874-85e1-463d-aa8b-7736a7f36be6/nmstate-webhook/0.log" Dec 02 16:29:01 crc kubenswrapper[4900]: I1202 16:29:01.910423 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:29:01 crc kubenswrapper[4900]: E1202 16:29:01.911296 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:29:03 crc kubenswrapper[4900]: I1202 16:29:03.086838 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-4w4nt_32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4/kube-rbac-proxy/0.log" Dec 02 16:29:03 crc kubenswrapper[4900]: I1202 16:29:03.426054 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-4w4nt_32340cf3-dbfb-4e8b-bc67-9da7c13ac1f4/controller/0.log" Dec 02 16:29:03 crc kubenswrapper[4900]: I1202 16:29:03.499323 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-frr-files/0.log" Dec 02 16:29:03 crc kubenswrapper[4900]: I1202 16:29:03.701068 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-reloader/0.log" Dec 02 16:29:03 crc kubenswrapper[4900]: I1202 16:29:03.706637 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-frr-files/0.log" Dec 02 16:29:04 crc kubenswrapper[4900]: I1202 16:29:04.474405 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-metrics/0.log" Dec 02 16:29:04 crc kubenswrapper[4900]: I1202 16:29:04.530602 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-reloader/0.log" Dec 02 16:29:04 crc kubenswrapper[4900]: I1202 16:29:04.670049 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-reloader/0.log" Dec 02 16:29:04 crc kubenswrapper[4900]: I1202 16:29:04.727543 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-frr-files/0.log" Dec 02 16:29:04 crc kubenswrapper[4900]: I1202 16:29:04.731584 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-metrics/0.log" Dec 02 16:29:04 crc kubenswrapper[4900]: I1202 16:29:04.784725 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-metrics/0.log" Dec 02 16:29:04 crc kubenswrapper[4900]: I1202 16:29:04.931875 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-reloader/0.log" Dec 02 16:29:04 crc kubenswrapper[4900]: I1202 16:29:04.968081 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-frr-files/0.log" Dec 02 16:29:04 crc kubenswrapper[4900]: I1202 16:29:04.973371 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/cp-metrics/0.log" Dec 02 16:29:05 crc kubenswrapper[4900]: I1202 16:29:05.020373 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/controller/0.log" Dec 02 16:29:05 crc kubenswrapper[4900]: I1202 16:29:05.202139 4900 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/frr-metrics/0.log" Dec 02 16:29:05 crc kubenswrapper[4900]: I1202 16:29:05.228065 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/kube-rbac-proxy/0.log" Dec 02 16:29:05 crc kubenswrapper[4900]: I1202 16:29:05.335601 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/kube-rbac-proxy-frr/0.log" Dec 02 16:29:05 crc kubenswrapper[4900]: I1202 16:29:05.520211 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/reloader/0.log" Dec 02 16:29:05 crc kubenswrapper[4900]: I1202 16:29:05.602456 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-qpbtr_cf13f239-289a-466a-8198-b9d0045278ea/frr-k8s-webhook-server/0.log" Dec 02 16:29:05 crc kubenswrapper[4900]: I1202 16:29:05.789530 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5c5c9bf76c-rxvh2_bf1a22ef-b575-4d5c-b109-3ec72f7eb657/manager/0.log" Dec 02 16:29:06 crc kubenswrapper[4900]: I1202 16:29:06.390493 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-57dfb79cdb-sqljw_33794e3c-e37e-4e6c-b384-19cea6e2ce59/webhook-server/0.log" Dec 02 16:29:06 crc kubenswrapper[4900]: I1202 16:29:06.642996 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-c4q7g_b7762b7c-1278-4ecd-a268-a87332a08d60/kube-rbac-proxy/0.log" Dec 02 16:29:07 crc kubenswrapper[4900]: I1202 16:29:07.463128 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-c4q7g_b7762b7c-1278-4ecd-a268-a87332a08d60/speaker/0.log" Dec 02 16:29:08 crc kubenswrapper[4900]: I1202 16:29:08.321019 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n75f5_3a65ca05-cf73-4b08-989e-0db3f81747a2/frr/0.log" Dec 02 16:29:14 crc kubenswrapper[4900]: I1202 16:29:14.951734 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:29:14 crc kubenswrapper[4900]: E1202 16:29:14.952967 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:29:15 crc kubenswrapper[4900]: I1202 16:29:15.450885 4900 scope.go:117] "RemoveContainer" containerID="a44540f66e3b3959eb0358560f542f32e363a52a086d6765ab58bf74e2540fea" Dec 02 16:29:21 crc kubenswrapper[4900]: I1202 16:29:21.645845 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh_d5eadb57-166f-40ac-aa67-845abc3e919f/util/0.log" Dec 02 16:29:21 crc kubenswrapper[4900]: I1202 16:29:21.831475 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh_d5eadb57-166f-40ac-aa67-845abc3e919f/util/0.log" Dec 02 16:29:21 crc kubenswrapper[4900]: I1202 
16:29:21.838092 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh_d5eadb57-166f-40ac-aa67-845abc3e919f/pull/0.log" Dec 02 16:29:21 crc kubenswrapper[4900]: I1202 16:29:21.875368 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh_d5eadb57-166f-40ac-aa67-845abc3e919f/pull/0.log" Dec 02 16:29:22 crc kubenswrapper[4900]: I1202 16:29:22.018224 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh_d5eadb57-166f-40ac-aa67-845abc3e919f/util/0.log" Dec 02 16:29:22 crc kubenswrapper[4900]: I1202 16:29:22.041706 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh_d5eadb57-166f-40ac-aa67-845abc3e919f/extract/0.log" Dec 02 16:29:22 crc kubenswrapper[4900]: I1202 16:29:22.056416 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931afjmqh_d5eadb57-166f-40ac-aa67-845abc3e919f/pull/0.log" Dec 02 16:29:22 crc kubenswrapper[4900]: I1202 16:29:22.217383 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl_ceb9bfc0-6b89-467f-b74a-678be8a2df0c/util/0.log" Dec 02 16:29:22 crc kubenswrapper[4900]: I1202 16:29:22.405526 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl_ceb9bfc0-6b89-467f-b74a-678be8a2df0c/pull/0.log" Dec 02 16:29:22 crc kubenswrapper[4900]: I1202 16:29:22.406020 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl_ceb9bfc0-6b89-467f-b74a-678be8a2df0c/pull/0.log" Dec 02 16:29:22 crc kubenswrapper[4900]: I1202 16:29:22.423775 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl_ceb9bfc0-6b89-467f-b74a-678be8a2df0c/util/0.log" Dec 02 16:29:22 crc kubenswrapper[4900]: I1202 16:29:22.656876 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl_ceb9bfc0-6b89-467f-b74a-678be8a2df0c/extract/0.log" Dec 02 16:29:22 crc kubenswrapper[4900]: I1202 16:29:22.661859 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl_ceb9bfc0-6b89-467f-b74a-678be8a2df0c/pull/0.log" Dec 02 16:29:22 crc kubenswrapper[4900]: I1202 16:29:22.661999 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fpf4cl_ceb9bfc0-6b89-467f-b74a-678be8a2df0c/util/0.log" Dec 02 16:29:22 crc kubenswrapper[4900]: I1202 16:29:22.831619 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg_a46412aa-8251-4e10-ac23-23c749eeca63/util/0.log" Dec 02 16:29:23 crc kubenswrapper[4900]: I1202 16:29:23.068602 4900 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg_a46412aa-8251-4e10-ac23-23c749eeca63/util/0.log" Dec 02 16:29:23 crc kubenswrapper[4900]: I1202 16:29:23.078709 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg_a46412aa-8251-4e10-ac23-23c749eeca63/pull/0.log" Dec 02 16:29:23 crc kubenswrapper[4900]: I1202 16:29:23.088416 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg_a46412aa-8251-4e10-ac23-23c749eeca63/pull/0.log" Dec 02 16:29:23 crc kubenswrapper[4900]: I1202 16:29:23.573505 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg_a46412aa-8251-4e10-ac23-23c749eeca63/pull/0.log" Dec 02 16:29:23 crc kubenswrapper[4900]: I1202 16:29:23.589120 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg_a46412aa-8251-4e10-ac23-23c749eeca63/util/0.log" Dec 02 16:29:23 crc kubenswrapper[4900]: I1202 16:29:23.599007 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107qqwg_a46412aa-8251-4e10-ac23-23c749eeca63/extract/0.log" Dec 02 16:29:23 crc kubenswrapper[4900]: I1202 16:29:23.739501 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8_8d928ba2-b289-4ede-96e5-a136771b99b1/util/0.log" Dec 02 16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.018660 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8_8d928ba2-b289-4ede-96e5-a136771b99b1/pull/0.log" Dec 02 16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.034818 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8_8d928ba2-b289-4ede-96e5-a136771b99b1/util/0.log" Dec 02 16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.042368 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8_8d928ba2-b289-4ede-96e5-a136771b99b1/pull/0.log" Dec 02 16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.211102 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8_8d928ba2-b289-4ede-96e5-a136771b99b1/extract/0.log" Dec 02 16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.233500 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8_8d928ba2-b289-4ede-96e5-a136771b99b1/util/0.log" Dec 02 16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.237968 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f836xhr8_8d928ba2-b289-4ede-96e5-a136771b99b1/pull/0.log" Dec 02 16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.414187 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jt5q6_99ea26f5-048d-4410-ba58-83c56333dcc0/extract-utilities/0.log" Dec 02 
16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.613628 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jt5q6_99ea26f5-048d-4410-ba58-83c56333dcc0/extract-content/0.log" Dec 02 16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.624267 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jt5q6_99ea26f5-048d-4410-ba58-83c56333dcc0/extract-utilities/0.log" Dec 02 16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.673113 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jt5q6_99ea26f5-048d-4410-ba58-83c56333dcc0/extract-content/0.log" Dec 02 16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.838263 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jt5q6_99ea26f5-048d-4410-ba58-83c56333dcc0/extract-content/0.log" Dec 02 16:29:24 crc kubenswrapper[4900]: I1202 16:29:24.863379 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jt5q6_99ea26f5-048d-4410-ba58-83c56333dcc0/extract-utilities/0.log" Dec 02 16:29:25 crc kubenswrapper[4900]: I1202 16:29:25.105695 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5dmz9_788f8fe8-a1c6-4fb2-a117-88ffe447aec2/extract-utilities/0.log" Dec 02 16:29:25 crc kubenswrapper[4900]: I1202 16:29:25.348760 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5dmz9_788f8fe8-a1c6-4fb2-a117-88ffe447aec2/extract-content/0.log" Dec 02 16:29:25 crc kubenswrapper[4900]: I1202 16:29:25.419758 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5dmz9_788f8fe8-a1c6-4fb2-a117-88ffe447aec2/extract-content/0.log" Dec 02 16:29:25 crc kubenswrapper[4900]: I1202 16:29:25.444426 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5dmz9_788f8fe8-a1c6-4fb2-a117-88ffe447aec2/extract-utilities/0.log" Dec 02 16:29:25 crc kubenswrapper[4900]: I1202 16:29:25.627363 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5dmz9_788f8fe8-a1c6-4fb2-a117-88ffe447aec2/extract-utilities/0.log" Dec 02 16:29:25 crc kubenswrapper[4900]: I1202 16:29:25.639182 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5dmz9_788f8fe8-a1c6-4fb2-a117-88ffe447aec2/extract-content/0.log" Dec 02 16:29:25 crc kubenswrapper[4900]: I1202 16:29:25.803859 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5dmz9_788f8fe8-a1c6-4fb2-a117-88ffe447aec2/registry-server/0.log" Dec 02 16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.115363 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jt5q6_99ea26f5-048d-4410-ba58-83c56333dcc0/registry-server/0.log" Dec 02 16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.279009 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wfqvd_fe7ce6ee-fda9-4b74-a46d-4918743dbeb8/marketplace-operator/0.log" Dec 02 16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.288733 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hppxd_9872ff30-bf71-4634-becb-6a860eff216f/extract-utilities/0.log" Dec 02 
16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.456285 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hppxd_9872ff30-bf71-4634-becb-6a860eff216f/extract-utilities/0.log" Dec 02 16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.462009 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hppxd_9872ff30-bf71-4634-becb-6a860eff216f/extract-content/0.log" Dec 02 16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.476894 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hppxd_9872ff30-bf71-4634-becb-6a860eff216f/extract-content/0.log" Dec 02 16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.655419 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hppxd_9872ff30-bf71-4634-becb-6a860eff216f/extract-content/0.log" Dec 02 16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.678701 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hppxd_9872ff30-bf71-4634-becb-6a860eff216f/extract-utilities/0.log" Dec 02 16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.728188 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftnmg_14ce31ad-5eb1-449a-9b78-48741f5a05fa/extract-utilities/0.log" Dec 02 16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.955030 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hppxd_9872ff30-bf71-4634-becb-6a860eff216f/registry-server/0.log" Dec 02 16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.970067 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftnmg_14ce31ad-5eb1-449a-9b78-48741f5a05fa/extract-utilities/0.log" Dec 02 16:29:26 crc kubenswrapper[4900]: I1202 16:29:26.982252 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftnmg_14ce31ad-5eb1-449a-9b78-48741f5a05fa/extract-content/0.log" Dec 02 16:29:27 crc kubenswrapper[4900]: I1202 16:29:27.012181 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftnmg_14ce31ad-5eb1-449a-9b78-48741f5a05fa/extract-content/0.log" Dec 02 16:29:27 crc kubenswrapper[4900]: I1202 16:29:27.160055 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftnmg_14ce31ad-5eb1-449a-9b78-48741f5a05fa/extract-content/0.log" Dec 02 16:29:27 crc kubenswrapper[4900]: I1202 16:29:27.186147 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftnmg_14ce31ad-5eb1-449a-9b78-48741f5a05fa/extract-utilities/0.log" Dec 02 16:29:27 crc kubenswrapper[4900]: I1202 16:29:27.605155 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ftnmg_14ce31ad-5eb1-449a-9b78-48741f5a05fa/registry-server/0.log" Dec 02 16:29:27 crc kubenswrapper[4900]: I1202 16:29:27.911817 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:29:27 crc kubenswrapper[4900]: E1202 16:29:27.912717 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:29:40 crc kubenswrapper[4900]: I1202 16:29:40.911113 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:29:40 crc kubenswrapper[4900]: E1202 16:29:40.911956 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:29:43 crc kubenswrapper[4900]: I1202 16:29:43.378215 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-qv5tp_4c8af68c-3b2d-44ce-86e7-5d94a1038d5f/prometheus-operator/0.log" Dec 02 16:29:43 crc kubenswrapper[4900]: I1202 16:29:43.542801 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-79f9546874-bcmqr_e5fbb4b2-ff82-49df-8e11-16aa2e348fa0/prometheus-operator-admission-webhook/0.log" Dec 02 16:29:43 crc kubenswrapper[4900]: I1202 16:29:43.606904 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-79f9546874-fx89g_18ac83f9-098d-4d3f-ab15-413671561160/prometheus-operator-admission-webhook/0.log" Dec 02 16:29:43 crc kubenswrapper[4900]: I1202 16:29:43.753101 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-w8mjx_186fb8e8-d830-4dc9-8b1a-a596b1348b39/operator/0.log" Dec 02 16:29:43 crc kubenswrapper[4900]: I1202 16:29:43.875497 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-cghkz_7e0d1a5d-af3d-4e3f-add9-e617a98cd95e/perses-operator/0.log" Dec 02 16:29:51 crc kubenswrapper[4900]: I1202 16:29:51.910678 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:29:51 crc kubenswrapper[4900]: E1202 16:29:51.911459 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.167719 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm"] Dec 02 16:30:00 crc kubenswrapper[4900]: E1202 16:30:00.170029 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerName="registry-server" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.170053 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerName="registry-server" Dec 02 16:30:00 crc kubenswrapper[4900]: E1202 16:30:00.170094 4900 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerName="extract-content" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.170100 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerName="extract-content" Dec 02 16:30:00 crc kubenswrapper[4900]: E1202 16:30:00.170112 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerName="extract-utilities" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.170119 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerName="extract-utilities" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.170390 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf09cbfc-09cf-4da8-90b2-5e4e897f8f0f" containerName="registry-server" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.172436 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.176880 4900 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.184735 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm"] Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.215468 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwggh\" (UniqueName: \"kubernetes.io/projected/0b1177ca-ab84-45ee-8344-1565986444d6-kube-api-access-vwggh\") pod \"collect-profiles-29411550-gvcgm\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.215590 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b1177ca-ab84-45ee-8344-1565986444d6-secret-volume\") pod \"collect-profiles-29411550-gvcgm\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.215727 4900 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.215796 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b1177ca-ab84-45ee-8344-1565986444d6-config-volume\") pod \"collect-profiles-29411550-gvcgm\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.317695 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwggh\" (UniqueName: \"kubernetes.io/projected/0b1177ca-ab84-45ee-8344-1565986444d6-kube-api-access-vwggh\") pod \"collect-profiles-29411550-gvcgm\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.317776 
4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b1177ca-ab84-45ee-8344-1565986444d6-secret-volume\") pod \"collect-profiles-29411550-gvcgm\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.317840 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b1177ca-ab84-45ee-8344-1565986444d6-config-volume\") pod \"collect-profiles-29411550-gvcgm\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.318853 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b1177ca-ab84-45ee-8344-1565986444d6-config-volume\") pod \"collect-profiles-29411550-gvcgm\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.337240 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b1177ca-ab84-45ee-8344-1565986444d6-secret-volume\") pod \"collect-profiles-29411550-gvcgm\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.342520 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwggh\" (UniqueName: \"kubernetes.io/projected/0b1177ca-ab84-45ee-8344-1565986444d6-kube-api-access-vwggh\") pod \"collect-profiles-29411550-gvcgm\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:00 crc kubenswrapper[4900]: I1202 16:30:00.530924 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:01 crc kubenswrapper[4900]: I1202 16:30:01.038682 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm"] Dec 02 16:30:01 crc kubenswrapper[4900]: I1202 16:30:01.926604 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mkqs6"] Dec 02 16:30:01 crc kubenswrapper[4900]: I1202 16:30:01.929569 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:01 crc kubenswrapper[4900]: I1202 16:30:01.940872 4900 generic.go:334] "Generic (PLEG): container finished" podID="0b1177ca-ab84-45ee-8344-1565986444d6" containerID="11dfb295c5297d95baaebf7fd9211eeb34def145e668e52436271aa6bff812c8" exitCode=0 Dec 02 16:30:01 crc kubenswrapper[4900]: I1202 16:30:01.940913 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" event={"ID":"0b1177ca-ab84-45ee-8344-1565986444d6","Type":"ContainerDied","Data":"11dfb295c5297d95baaebf7fd9211eeb34def145e668e52436271aa6bff812c8"} Dec 02 16:30:01 crc kubenswrapper[4900]: I1202 16:30:01.940937 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" event={"ID":"0b1177ca-ab84-45ee-8344-1565986444d6","Type":"ContainerStarted","Data":"9f14b22a27a8a4d4299fdf4784c50461d8324ef961f28729f62412aab2280d6d"} Dec 02 16:30:01 crc kubenswrapper[4900]: I1202 16:30:01.942096 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkqs6"] Dec 02 16:30:01 crc kubenswrapper[4900]: I1202 16:30:01.962572 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-utilities\") pod \"redhat-marketplace-mkqs6\" (UID: \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:01 crc kubenswrapper[4900]: I1202 16:30:01.962610 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-catalog-content\") pod \"redhat-marketplace-mkqs6\" (UID: \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:01 crc kubenswrapper[4900]: I1202 16:30:01.962638 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsnnr\" (UniqueName: \"kubernetes.io/projected/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-kube-api-access-rsnnr\") pod \"redhat-marketplace-mkqs6\" (UID: \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:02 crc kubenswrapper[4900]: I1202 16:30:02.064405 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-utilities\") pod \"redhat-marketplace-mkqs6\" (UID: \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:02 crc kubenswrapper[4900]: I1202 16:30:02.064446 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-catalog-content\") pod \"redhat-marketplace-mkqs6\" (UID: \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:02 crc kubenswrapper[4900]: I1202 16:30:02.064467 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsnnr\" (UniqueName: \"kubernetes.io/projected/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-kube-api-access-rsnnr\") pod \"redhat-marketplace-mkqs6\" (UID: 
\"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:02 crc kubenswrapper[4900]: I1202 16:30:02.064882 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-catalog-content\") pod \"redhat-marketplace-mkqs6\" (UID: \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:02 crc kubenswrapper[4900]: I1202 16:30:02.065064 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-utilities\") pod \"redhat-marketplace-mkqs6\" (UID: \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:02 crc kubenswrapper[4900]: I1202 16:30:02.092323 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsnnr\" (UniqueName: \"kubernetes.io/projected/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-kube-api-access-rsnnr\") pod \"redhat-marketplace-mkqs6\" (UID: \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:02 crc kubenswrapper[4900]: I1202 16:30:02.275772 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:02 crc kubenswrapper[4900]: I1202 16:30:02.832545 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkqs6"] Dec 02 16:30:02 crc kubenswrapper[4900]: I1202 16:30:02.952020 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkqs6" event={"ID":"b5b05cc4-4af6-4a13-ada5-4d493086e7a8","Type":"ContainerStarted","Data":"9ef8c6406408f2bcca14945ba76b427cadcefe2a24ae9d8eb493f61c91793664"} Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.332079 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.499308 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b1177ca-ab84-45ee-8344-1565986444d6-config-volume\") pod \"0b1177ca-ab84-45ee-8344-1565986444d6\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.499378 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b1177ca-ab84-45ee-8344-1565986444d6-secret-volume\") pod \"0b1177ca-ab84-45ee-8344-1565986444d6\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.499443 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwggh\" (UniqueName: \"kubernetes.io/projected/0b1177ca-ab84-45ee-8344-1565986444d6-kube-api-access-vwggh\") pod \"0b1177ca-ab84-45ee-8344-1565986444d6\" (UID: \"0b1177ca-ab84-45ee-8344-1565986444d6\") " Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.499991 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b1177ca-ab84-45ee-8344-1565986444d6-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b1177ca-ab84-45ee-8344-1565986444d6" (UID: "0b1177ca-ab84-45ee-8344-1565986444d6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.504697 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b1177ca-ab84-45ee-8344-1565986444d6-kube-api-access-vwggh" (OuterVolumeSpecName: "kube-api-access-vwggh") pod "0b1177ca-ab84-45ee-8344-1565986444d6" (UID: "0b1177ca-ab84-45ee-8344-1565986444d6"). InnerVolumeSpecName "kube-api-access-vwggh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.505717 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1177ca-ab84-45ee-8344-1565986444d6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0b1177ca-ab84-45ee-8344-1565986444d6" (UID: "0b1177ca-ab84-45ee-8344-1565986444d6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.602259 4900 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b1177ca-ab84-45ee-8344-1565986444d6-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.602305 4900 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b1177ca-ab84-45ee-8344-1565986444d6-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.602347 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwggh\" (UniqueName: \"kubernetes.io/projected/0b1177ca-ab84-45ee-8344-1565986444d6-kube-api-access-vwggh\") on node \"crc\" DevicePath \"\"" Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.964699 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.964733 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411550-gvcgm" event={"ID":"0b1177ca-ab84-45ee-8344-1565986444d6","Type":"ContainerDied","Data":"9f14b22a27a8a4d4299fdf4784c50461d8324ef961f28729f62412aab2280d6d"} Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.965293 4900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f14b22a27a8a4d4299fdf4784c50461d8324ef961f28729f62412aab2280d6d" Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.966520 4900 generic.go:334] "Generic (PLEG): container finished" podID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" containerID="0efee51752728abb82c9fe43cb916d66ac53e4d8b2f76f59ef793df7488a6439" exitCode=0 Dec 02 16:30:03 crc kubenswrapper[4900]: I1202 16:30:03.966562 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkqs6" event={"ID":"b5b05cc4-4af6-4a13-ada5-4d493086e7a8","Type":"ContainerDied","Data":"0efee51752728abb82c9fe43cb916d66ac53e4d8b2f76f59ef793df7488a6439"} Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.321886 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c528k"] Dec 02 16:30:04 crc kubenswrapper[4900]: E1202 16:30:04.322362 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1177ca-ab84-45ee-8344-1565986444d6" containerName="collect-profiles" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.322384 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1177ca-ab84-45ee-8344-1565986444d6" containerName="collect-profiles" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.322700 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1177ca-ab84-45ee-8344-1565986444d6" containerName="collect-profiles" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.325762 4900 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.335117 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c528k"] Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.449351 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv"] Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.467115 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411505-6bxbv"] Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.524112 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-utilities\") pod \"community-operators-c528k\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.524211 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkrbb\" (UniqueName: \"kubernetes.io/projected/684fafcd-d8d6-4e94-a690-da4649764637-kube-api-access-rkrbb\") pod \"community-operators-c528k\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.524492 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-catalog-content\") pod \"community-operators-c528k\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.626488 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-utilities\") pod \"community-operators-c528k\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.626565 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkrbb\" (UniqueName: \"kubernetes.io/projected/684fafcd-d8d6-4e94-a690-da4649764637-kube-api-access-rkrbb\") pod \"community-operators-c528k\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.626691 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-catalog-content\") pod \"community-operators-c528k\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.627146 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-catalog-content\") pod \"community-operators-c528k\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.627490 4900 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-utilities\") pod \"community-operators-c528k\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.654786 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkrbb\" (UniqueName: \"kubernetes.io/projected/684fafcd-d8d6-4e94-a690-da4649764637-kube-api-access-rkrbb\") pod \"community-operators-c528k\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.657632 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:04 crc kubenswrapper[4900]: I1202 16:30:04.927762 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b0959d-7880-4c8a-b4c5-48373b46c779" path="/var/lib/kubelet/pods/09b0959d-7880-4c8a-b4c5-48373b46c779/volumes" Dec 02 16:30:05 crc kubenswrapper[4900]: I1202 16:30:05.136469 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c528k"] Dec 02 16:30:05 crc kubenswrapper[4900]: W1202 16:30:05.142700 4900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684fafcd_d8d6_4e94_a690_da4649764637.slice/crio-92276528c6eb3e521a85c365abd8d77c7ddcfa7ba57419ac3eef988ee0278212 WatchSource:0}: Error finding container 92276528c6eb3e521a85c365abd8d77c7ddcfa7ba57419ac3eef988ee0278212: Status 404 returned error can't find the container with id 92276528c6eb3e521a85c365abd8d77c7ddcfa7ba57419ac3eef988ee0278212 Dec 02 16:30:06 crc kubenswrapper[4900]: I1202 16:30:06.038369 4900 generic.go:334] "Generic (PLEG): container finished" podID="684fafcd-d8d6-4e94-a690-da4649764637" containerID="3ecc155667147d0c3c116f9ef77b6ec927dc526f471de95abe1081c577f4df05" exitCode=0 Dec 02 16:30:06 crc kubenswrapper[4900]: I1202 16:30:06.041008 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c528k" event={"ID":"684fafcd-d8d6-4e94-a690-da4649764637","Type":"ContainerDied","Data":"3ecc155667147d0c3c116f9ef77b6ec927dc526f471de95abe1081c577f4df05"} Dec 02 16:30:06 crc kubenswrapper[4900]: I1202 16:30:06.041069 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c528k" event={"ID":"684fafcd-d8d6-4e94-a690-da4649764637","Type":"ContainerStarted","Data":"92276528c6eb3e521a85c365abd8d77c7ddcfa7ba57419ac3eef988ee0278212"} Dec 02 16:30:06 crc kubenswrapper[4900]: I1202 16:30:06.087068 4900 generic.go:334] "Generic (PLEG): container finished" podID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" containerID="76c14defbcd1f99d1e635485cf38e4c6f5c2d3abdbd50849d4d81aec9b256ef4" exitCode=0 Dec 02 16:30:06 crc kubenswrapper[4900]: I1202 16:30:06.087121 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkqs6" event={"ID":"b5b05cc4-4af6-4a13-ada5-4d493086e7a8","Type":"ContainerDied","Data":"76c14defbcd1f99d1e635485cf38e4c6f5c2d3abdbd50849d4d81aec9b256ef4"} Dec 02 16:30:06 crc kubenswrapper[4900]: I1202 16:30:06.911049 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:30:06 crc 
kubenswrapper[4900]: E1202 16:30:06.911778 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:30:08 crc kubenswrapper[4900]: I1202 16:30:08.113774 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkqs6" event={"ID":"b5b05cc4-4af6-4a13-ada5-4d493086e7a8","Type":"ContainerStarted","Data":"01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a"} Dec 02 16:30:08 crc kubenswrapper[4900]: I1202 16:30:08.116963 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c528k" event={"ID":"684fafcd-d8d6-4e94-a690-da4649764637","Type":"ContainerStarted","Data":"b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c"} Dec 02 16:30:08 crc kubenswrapper[4900]: I1202 16:30:08.138581 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mkqs6" podStartSLOduration=4.191633487 podStartE2EDuration="7.138559305s" podCreationTimestamp="2025-12-02 16:30:01 +0000 UTC" firstStartedPulling="2025-12-02 16:30:03.968886302 +0000 UTC m=+10049.384700163" lastFinishedPulling="2025-12-02 16:30:06.91581213 +0000 UTC m=+10052.331625981" observedRunningTime="2025-12-02 16:30:08.131076622 +0000 UTC m=+10053.546890473" watchObservedRunningTime="2025-12-02 16:30:08.138559305 +0000 UTC m=+10053.554373166" Dec 02 16:30:09 crc kubenswrapper[4900]: I1202 16:30:09.129222 4900 generic.go:334] "Generic (PLEG): container finished" podID="684fafcd-d8d6-4e94-a690-da4649764637" containerID="b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c" exitCode=0 Dec 02 16:30:09 crc kubenswrapper[4900]: I1202 16:30:09.129820 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c528k" event={"ID":"684fafcd-d8d6-4e94-a690-da4649764637","Type":"ContainerDied","Data":"b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c"} Dec 02 16:30:10 crc kubenswrapper[4900]: I1202 16:30:10.159146 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c528k" event={"ID":"684fafcd-d8d6-4e94-a690-da4649764637","Type":"ContainerStarted","Data":"7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34"} Dec 02 16:30:10 crc kubenswrapper[4900]: I1202 16:30:10.185821 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c528k" podStartSLOduration=2.654411783 podStartE2EDuration="6.185802631s" podCreationTimestamp="2025-12-02 16:30:04 +0000 UTC" firstStartedPulling="2025-12-02 16:30:06.042764407 +0000 UTC m=+10051.458578258" lastFinishedPulling="2025-12-02 16:30:09.574155215 +0000 UTC m=+10054.989969106" observedRunningTime="2025-12-02 16:30:10.179849051 +0000 UTC m=+10055.595662902" watchObservedRunningTime="2025-12-02 16:30:10.185802631 +0000 UTC m=+10055.601616482" Dec 02 16:30:12 crc kubenswrapper[4900]: I1202 16:30:12.277347 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:12 crc kubenswrapper[4900]: I1202 16:30:12.277688 4900 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:12 crc kubenswrapper[4900]: I1202 16:30:12.544256 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:13 crc kubenswrapper[4900]: I1202 16:30:13.264939 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:14 crc kubenswrapper[4900]: I1202 16:30:14.322004 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkqs6"] Dec 02 16:30:14 crc kubenswrapper[4900]: I1202 16:30:14.658797 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:14 crc kubenswrapper[4900]: I1202 16:30:14.658878 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:14 crc kubenswrapper[4900]: I1202 16:30:14.743162 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.215337 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mkqs6" podUID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" containerName="registry-server" containerID="cri-o://01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a" gracePeriod=2 Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.280223 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.549878 4900 scope.go:117] "RemoveContainer" containerID="064b5e494c9117057043f9fd0c313e37cb87ff2aaf02d40a2a3cd892c6bf5eb1" Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.714096 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.790631 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-utilities\") pod \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\" (UID: \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.790778 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-catalog-content\") pod \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\" (UID: \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.790846 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsnnr\" (UniqueName: \"kubernetes.io/projected/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-kube-api-access-rsnnr\") pod \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\" (UID: \"b5b05cc4-4af6-4a13-ada5-4d493086e7a8\") " Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.791731 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-utilities" (OuterVolumeSpecName: "utilities") pod "b5b05cc4-4af6-4a13-ada5-4d493086e7a8" (UID: "b5b05cc4-4af6-4a13-ada5-4d493086e7a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.797737 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-kube-api-access-rsnnr" (OuterVolumeSpecName: "kube-api-access-rsnnr") pod "b5b05cc4-4af6-4a13-ada5-4d493086e7a8" (UID: "b5b05cc4-4af6-4a13-ada5-4d493086e7a8"). InnerVolumeSpecName "kube-api-access-rsnnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.811553 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5b05cc4-4af6-4a13-ada5-4d493086e7a8" (UID: "b5b05cc4-4af6-4a13-ada5-4d493086e7a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.893799 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.893845 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsnnr\" (UniqueName: \"kubernetes.io/projected/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-kube-api-access-rsnnr\") on node \"crc\" DevicePath \"\"" Dec 02 16:30:15 crc kubenswrapper[4900]: I1202 16:30:15.893857 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b05cc4-4af6-4a13-ada5-4d493086e7a8-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.234387 4900 generic.go:334] "Generic (PLEG): container finished" podID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" containerID="01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a" exitCode=0 Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.234494 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkqs6" event={"ID":"b5b05cc4-4af6-4a13-ada5-4d493086e7a8","Type":"ContainerDied","Data":"01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a"} Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.234584 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkqs6" Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.234611 4900 scope.go:117] "RemoveContainer" containerID="01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a" Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.234588 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkqs6" event={"ID":"b5b05cc4-4af6-4a13-ada5-4d493086e7a8","Type":"ContainerDied","Data":"9ef8c6406408f2bcca14945ba76b427cadcefe2a24ae9d8eb493f61c91793664"} Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.288229 4900 scope.go:117] "RemoveContainer" containerID="76c14defbcd1f99d1e635485cf38e4c6f5c2d3abdbd50849d4d81aec9b256ef4" Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.314331 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkqs6"] Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.323476 4900 scope.go:117] "RemoveContainer" containerID="0efee51752728abb82c9fe43cb916d66ac53e4d8b2f76f59ef793df7488a6439" Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.335457 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkqs6"] Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.424956 4900 scope.go:117] "RemoveContainer" containerID="01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a" Dec 02 16:30:16 crc kubenswrapper[4900]: E1202 16:30:16.426418 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a\": container with ID starting with 01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a not found: ID does not exist" containerID="01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a" Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.426478 4900 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a"} err="failed to get container status \"01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a\": rpc error: code = NotFound desc = could not find container \"01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a\": container with ID starting with 01391214b3b0c8af993cba4a4defda59974527e5c54da0a1e537dc3826d6022a not found: ID does not exist" Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.426506 4900 scope.go:117] "RemoveContainer" containerID="76c14defbcd1f99d1e635485cf38e4c6f5c2d3abdbd50849d4d81aec9b256ef4" Dec 02 16:30:16 crc kubenswrapper[4900]: E1202 16:30:16.427055 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76c14defbcd1f99d1e635485cf38e4c6f5c2d3abdbd50849d4d81aec9b256ef4\": container with ID starting with 76c14defbcd1f99d1e635485cf38e4c6f5c2d3abdbd50849d4d81aec9b256ef4 not found: ID does not exist" containerID="76c14defbcd1f99d1e635485cf38e4c6f5c2d3abdbd50849d4d81aec9b256ef4" Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.427104 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c14defbcd1f99d1e635485cf38e4c6f5c2d3abdbd50849d4d81aec9b256ef4"} err="failed to get container status \"76c14defbcd1f99d1e635485cf38e4c6f5c2d3abdbd50849d4d81aec9b256ef4\": rpc error: code = NotFound desc = could not find container \"76c14defbcd1f99d1e635485cf38e4c6f5c2d3abdbd50849d4d81aec9b256ef4\": container with ID starting with 76c14defbcd1f99d1e635485cf38e4c6f5c2d3abdbd50849d4d81aec9b256ef4 not found: ID does not exist" Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.427134 4900 scope.go:117] "RemoveContainer" containerID="0efee51752728abb82c9fe43cb916d66ac53e4d8b2f76f59ef793df7488a6439" Dec 02 16:30:16 crc kubenswrapper[4900]: E1202 16:30:16.427612 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0efee51752728abb82c9fe43cb916d66ac53e4d8b2f76f59ef793df7488a6439\": container with ID starting with 0efee51752728abb82c9fe43cb916d66ac53e4d8b2f76f59ef793df7488a6439 not found: ID does not exist" containerID="0efee51752728abb82c9fe43cb916d66ac53e4d8b2f76f59ef793df7488a6439" Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.427674 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0efee51752728abb82c9fe43cb916d66ac53e4d8b2f76f59ef793df7488a6439"} err="failed to get container status \"0efee51752728abb82c9fe43cb916d66ac53e4d8b2f76f59ef793df7488a6439\": rpc error: code = NotFound desc = could not find container \"0efee51752728abb82c9fe43cb916d66ac53e4d8b2f76f59ef793df7488a6439\": container with ID starting with 0efee51752728abb82c9fe43cb916d66ac53e4d8b2f76f59ef793df7488a6439 not found: ID does not exist" Dec 02 16:30:16 crc kubenswrapper[4900]: I1202 16:30:16.926103 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" path="/var/lib/kubelet/pods/b5b05cc4-4af6-4a13-ada5-4d493086e7a8/volumes" Dec 02 16:30:17 crc kubenswrapper[4900]: I1202 16:30:17.127153 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c528k"] Dec 02 16:30:17 crc kubenswrapper[4900]: I1202 16:30:17.251036 4900 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/community-operators-c528k" podUID="684fafcd-d8d6-4e94-a690-da4649764637" containerName="registry-server" containerID="cri-o://7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34" gracePeriod=2 Dec 02 16:30:17 crc kubenswrapper[4900]: I1202 16:30:17.863487 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:17 crc kubenswrapper[4900]: I1202 16:30:17.956224 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-utilities\") pod \"684fafcd-d8d6-4e94-a690-da4649764637\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " Dec 02 16:30:17 crc kubenswrapper[4900]: I1202 16:30:17.956446 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkrbb\" (UniqueName: \"kubernetes.io/projected/684fafcd-d8d6-4e94-a690-da4649764637-kube-api-access-rkrbb\") pod \"684fafcd-d8d6-4e94-a690-da4649764637\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " Dec 02 16:30:17 crc kubenswrapper[4900]: I1202 16:30:17.956544 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-catalog-content\") pod \"684fafcd-d8d6-4e94-a690-da4649764637\" (UID: \"684fafcd-d8d6-4e94-a690-da4649764637\") " Dec 02 16:30:17 crc kubenswrapper[4900]: I1202 16:30:17.960979 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-utilities" (OuterVolumeSpecName: "utilities") pod "684fafcd-d8d6-4e94-a690-da4649764637" (UID: "684fafcd-d8d6-4e94-a690-da4649764637"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:30:17 crc kubenswrapper[4900]: I1202 16:30:17.963911 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/684fafcd-d8d6-4e94-a690-da4649764637-kube-api-access-rkrbb" (OuterVolumeSpecName: "kube-api-access-rkrbb") pod "684fafcd-d8d6-4e94-a690-da4649764637" (UID: "684fafcd-d8d6-4e94-a690-da4649764637"). InnerVolumeSpecName "kube-api-access-rkrbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.022975 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "684fafcd-d8d6-4e94-a690-da4649764637" (UID: "684fafcd-d8d6-4e94-a690-da4649764637"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.059534 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkrbb\" (UniqueName: \"kubernetes.io/projected/684fafcd-d8d6-4e94-a690-da4649764637-kube-api-access-rkrbb\") on node \"crc\" DevicePath \"\"" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.059594 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.059619 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684fafcd-d8d6-4e94-a690-da4649764637-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.266007 4900 generic.go:334] "Generic (PLEG): container finished" podID="684fafcd-d8d6-4e94-a690-da4649764637" containerID="7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34" exitCode=0 Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.266060 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c528k" event={"ID":"684fafcd-d8d6-4e94-a690-da4649764637","Type":"ContainerDied","Data":"7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34"} Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.266100 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c528k" event={"ID":"684fafcd-d8d6-4e94-a690-da4649764637","Type":"ContainerDied","Data":"92276528c6eb3e521a85c365abd8d77c7ddcfa7ba57419ac3eef988ee0278212"} Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.266132 4900 scope.go:117] "RemoveContainer" containerID="7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.266300 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c528k" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.316133 4900 scope.go:117] "RemoveContainer" containerID="b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.328204 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c528k"] Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.352014 4900 scope.go:117] "RemoveContainer" containerID="3ecc155667147d0c3c116f9ef77b6ec927dc526f471de95abe1081c577f4df05" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.352823 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c528k"] Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.437145 4900 scope.go:117] "RemoveContainer" containerID="7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34" Dec 02 16:30:18 crc kubenswrapper[4900]: E1202 16:30:18.437794 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34\": container with ID starting with 7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34 not found: ID does not exist" containerID="7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.437837 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34"} err="failed to get container status \"7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34\": rpc error: code = NotFound desc = could not find container \"7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34\": container with ID starting with 7377b99cce231a03374e4e00e01d5b4c1126942e92974b70fcb9712e2ec29e34 not found: ID does not exist" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.437862 4900 scope.go:117] "RemoveContainer" containerID="b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c" Dec 02 16:30:18 crc kubenswrapper[4900]: E1202 16:30:18.438300 4900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c\": container with ID starting with b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c not found: ID does not exist" containerID="b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.438358 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c"} err="failed to get container status \"b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c\": rpc error: code = NotFound desc = could not find container \"b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c\": container with ID starting with b8d4a15de404e1d94760a3ccaf2918728ab8ad1c2383a36fe14a9dddf92de23c not found: ID does not exist" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.438397 4900 scope.go:117] "RemoveContainer" containerID="3ecc155667147d0c3c116f9ef77b6ec927dc526f471de95abe1081c577f4df05" Dec 02 16:30:18 crc kubenswrapper[4900]: E1202 16:30:18.439140 4900 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3ecc155667147d0c3c116f9ef77b6ec927dc526f471de95abe1081c577f4df05\": container with ID starting with 3ecc155667147d0c3c116f9ef77b6ec927dc526f471de95abe1081c577f4df05 not found: ID does not exist" containerID="3ecc155667147d0c3c116f9ef77b6ec927dc526f471de95abe1081c577f4df05" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.439373 4900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ecc155667147d0c3c116f9ef77b6ec927dc526f471de95abe1081c577f4df05"} err="failed to get container status \"3ecc155667147d0c3c116f9ef77b6ec927dc526f471de95abe1081c577f4df05\": rpc error: code = NotFound desc = could not find container \"3ecc155667147d0c3c116f9ef77b6ec927dc526f471de95abe1081c577f4df05\": container with ID starting with 3ecc155667147d0c3c116f9ef77b6ec927dc526f471de95abe1081c577f4df05 not found: ID does not exist" Dec 02 16:30:18 crc kubenswrapper[4900]: I1202 16:30:18.931521 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="684fafcd-d8d6-4e94-a690-da4649764637" path="/var/lib/kubelet/pods/684fafcd-d8d6-4e94-a690-da4649764637/volumes" Dec 02 16:30:21 crc kubenswrapper[4900]: I1202 16:30:21.911212 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:30:21 crc kubenswrapper[4900]: E1202 16:30:21.912700 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:30:32 crc kubenswrapper[4900]: I1202 16:30:32.920396 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:30:32 crc kubenswrapper[4900]: E1202 16:30:32.924073 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:30:46 crc kubenswrapper[4900]: I1202 16:30:46.910929 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:30:46 crc kubenswrapper[4900]: E1202 16:30:46.911835 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:31:00 crc kubenswrapper[4900]: I1202 16:31:00.911318 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:31:00 crc kubenswrapper[4900]: E1202 16:31:00.912613 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:31:14 crc kubenswrapper[4900]: I1202 16:31:14.948266 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:31:14 crc kubenswrapper[4900]: E1202 16:31:14.949907 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.148469 4900 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-djxlm"] Dec 02 16:31:25 crc kubenswrapper[4900]: E1202 16:31:25.149743 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" containerName="extract-content" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.149762 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" containerName="extract-content" Dec 02 16:31:25 crc kubenswrapper[4900]: E1202 16:31:25.149790 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" containerName="registry-server" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.149799 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" containerName="registry-server" Dec 02 16:31:25 crc kubenswrapper[4900]: E1202 16:31:25.149818 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684fafcd-d8d6-4e94-a690-da4649764637" containerName="extract-utilities" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.149827 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="684fafcd-d8d6-4e94-a690-da4649764637" containerName="extract-utilities" Dec 02 16:31:25 crc kubenswrapper[4900]: E1202 16:31:25.149857 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684fafcd-d8d6-4e94-a690-da4649764637" containerName="registry-server" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.149865 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="684fafcd-d8d6-4e94-a690-da4649764637" containerName="registry-server" Dec 02 16:31:25 crc kubenswrapper[4900]: E1202 16:31:25.149878 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684fafcd-d8d6-4e94-a690-da4649764637" containerName="extract-content" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.149887 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="684fafcd-d8d6-4e94-a690-da4649764637" containerName="extract-content" Dec 02 16:31:25 crc kubenswrapper[4900]: E1202 16:31:25.149896 4900 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" containerName="extract-utilities" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.149903 4900 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" 
containerName="extract-utilities" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.150155 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="684fafcd-d8d6-4e94-a690-da4649764637" containerName="registry-server" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.150173 4900 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b05cc4-4af6-4a13-ada5-4d493086e7a8" containerName="registry-server" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.152159 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.163567 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-djxlm"] Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.360421 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-catalog-content\") pod \"certified-operators-djxlm\" (UID: \"7b375324-5fee-491f-8c91-7cc41edba76f\") " pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.360522 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-utilities\") pod \"certified-operators-djxlm\" (UID: \"7b375324-5fee-491f-8c91-7cc41edba76f\") " pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.360551 4900 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr8pl\" (UniqueName: \"kubernetes.io/projected/7b375324-5fee-491f-8c91-7cc41edba76f-kube-api-access-vr8pl\") pod \"certified-operators-djxlm\" (UID: \"7b375324-5fee-491f-8c91-7cc41edba76f\") " pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.462117 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-utilities\") pod \"certified-operators-djxlm\" (UID: \"7b375324-5fee-491f-8c91-7cc41edba76f\") " pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.462180 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr8pl\" (UniqueName: \"kubernetes.io/projected/7b375324-5fee-491f-8c91-7cc41edba76f-kube-api-access-vr8pl\") pod \"certified-operators-djxlm\" (UID: \"7b375324-5fee-491f-8c91-7cc41edba76f\") " pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.462400 4900 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-catalog-content\") pod \"certified-operators-djxlm\" (UID: \"7b375324-5fee-491f-8c91-7cc41edba76f\") " pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.462956 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-catalog-content\") pod \"certified-operators-djxlm\" (UID: 
\"7b375324-5fee-491f-8c91-7cc41edba76f\") " pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.463074 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-utilities\") pod \"certified-operators-djxlm\" (UID: \"7b375324-5fee-491f-8c91-7cc41edba76f\") " pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.574457 4900 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr8pl\" (UniqueName: \"kubernetes.io/projected/7b375324-5fee-491f-8c91-7cc41edba76f-kube-api-access-vr8pl\") pod \"certified-operators-djxlm\" (UID: \"7b375324-5fee-491f-8c91-7cc41edba76f\") " pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:25 crc kubenswrapper[4900]: I1202 16:31:25.776022 4900 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:26 crc kubenswrapper[4900]: I1202 16:31:26.276046 4900 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-djxlm"] Dec 02 16:31:26 crc kubenswrapper[4900]: I1202 16:31:26.298870 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djxlm" event={"ID":"7b375324-5fee-491f-8c91-7cc41edba76f","Type":"ContainerStarted","Data":"c7e531d006459f5e5b456fe50b27477413b0551580497171a8305db73b419208"} Dec 02 16:31:27 crc kubenswrapper[4900]: I1202 16:31:27.317198 4900 generic.go:334] "Generic (PLEG): container finished" podID="7b375324-5fee-491f-8c91-7cc41edba76f" containerID="c17e4dfac30144d60ca248152cad87e6beb3f2b4d99978046f604e66e0ee6212" exitCode=0 Dec 02 16:31:27 crc kubenswrapper[4900]: I1202 16:31:27.317291 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djxlm" event={"ID":"7b375324-5fee-491f-8c91-7cc41edba76f","Type":"ContainerDied","Data":"c17e4dfac30144d60ca248152cad87e6beb3f2b4d99978046f604e66e0ee6212"} Dec 02 16:31:27 crc kubenswrapper[4900]: I1202 16:31:27.321933 4900 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 16:31:27 crc kubenswrapper[4900]: I1202 16:31:27.910497 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:31:27 crc kubenswrapper[4900]: E1202 16:31:27.911095 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:31:29 crc kubenswrapper[4900]: I1202 16:31:29.368756 4900 generic.go:334] "Generic (PLEG): container finished" podID="7b375324-5fee-491f-8c91-7cc41edba76f" containerID="98fb7fe4891231272ef931b064fb047f588862b39d2cef90126abde16d604fda" exitCode=0 Dec 02 16:31:29 crc kubenswrapper[4900]: I1202 16:31:29.368822 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djxlm" 
event={"ID":"7b375324-5fee-491f-8c91-7cc41edba76f","Type":"ContainerDied","Data":"98fb7fe4891231272ef931b064fb047f588862b39d2cef90126abde16d604fda"} Dec 02 16:31:30 crc kubenswrapper[4900]: I1202 16:31:30.381280 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djxlm" event={"ID":"7b375324-5fee-491f-8c91-7cc41edba76f","Type":"ContainerStarted","Data":"24cd9d25e820e77d0c06b00de796489de470a73e53d732e65b0a4f9b8838e90e"} Dec 02 16:31:30 crc kubenswrapper[4900]: I1202 16:31:30.411848 4900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-djxlm" podStartSLOduration=2.871905552 podStartE2EDuration="5.411827253s" podCreationTimestamp="2025-12-02 16:31:25 +0000 UTC" firstStartedPulling="2025-12-02 16:31:27.321363197 +0000 UTC m=+10132.737177088" lastFinishedPulling="2025-12-02 16:31:29.861284928 +0000 UTC m=+10135.277098789" observedRunningTime="2025-12-02 16:31:30.402387884 +0000 UTC m=+10135.818201735" watchObservedRunningTime="2025-12-02 16:31:30.411827253 +0000 UTC m=+10135.827641104" Dec 02 16:31:35 crc kubenswrapper[4900]: I1202 16:31:35.776248 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:35 crc kubenswrapper[4900]: I1202 16:31:35.777074 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:35 crc kubenswrapper[4900]: I1202 16:31:35.859943 4900 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:36 crc kubenswrapper[4900]: I1202 16:31:36.554355 4900 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:36 crc kubenswrapper[4900]: I1202 16:31:36.619492 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-djxlm"] Dec 02 16:31:38 crc kubenswrapper[4900]: I1202 16:31:38.498932 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-djxlm" podUID="7b375324-5fee-491f-8c91-7cc41edba76f" containerName="registry-server" containerID="cri-o://24cd9d25e820e77d0c06b00de796489de470a73e53d732e65b0a4f9b8838e90e" gracePeriod=2 Dec 02 16:31:39 crc kubenswrapper[4900]: I1202 16:31:39.514072 4900 generic.go:334] "Generic (PLEG): container finished" podID="7b375324-5fee-491f-8c91-7cc41edba76f" containerID="24cd9d25e820e77d0c06b00de796489de470a73e53d732e65b0a4f9b8838e90e" exitCode=0 Dec 02 16:31:39 crc kubenswrapper[4900]: I1202 16:31:39.514106 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djxlm" event={"ID":"7b375324-5fee-491f-8c91-7cc41edba76f","Type":"ContainerDied","Data":"24cd9d25e820e77d0c06b00de796489de470a73e53d732e65b0a4f9b8838e90e"} Dec 02 16:31:39 crc kubenswrapper[4900]: I1202 16:31:39.892253 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:39 crc kubenswrapper[4900]: I1202 16:31:39.994468 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-catalog-content\") pod \"7b375324-5fee-491f-8c91-7cc41edba76f\" (UID: \"7b375324-5fee-491f-8c91-7cc41edba76f\") " Dec 02 16:31:39 crc kubenswrapper[4900]: I1202 16:31:39.994636 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-utilities\") pod \"7b375324-5fee-491f-8c91-7cc41edba76f\" (UID: \"7b375324-5fee-491f-8c91-7cc41edba76f\") " Dec 02 16:31:39 crc kubenswrapper[4900]: I1202 16:31:39.994811 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr8pl\" (UniqueName: \"kubernetes.io/projected/7b375324-5fee-491f-8c91-7cc41edba76f-kube-api-access-vr8pl\") pod \"7b375324-5fee-491f-8c91-7cc41edba76f\" (UID: \"7b375324-5fee-491f-8c91-7cc41edba76f\") " Dec 02 16:31:39 crc kubenswrapper[4900]: I1202 16:31:39.996168 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-utilities" (OuterVolumeSpecName: "utilities") pod "7b375324-5fee-491f-8c91-7cc41edba76f" (UID: "7b375324-5fee-491f-8c91-7cc41edba76f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.007961 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b375324-5fee-491f-8c91-7cc41edba76f-kube-api-access-vr8pl" (OuterVolumeSpecName: "kube-api-access-vr8pl") pod "7b375324-5fee-491f-8c91-7cc41edba76f" (UID: "7b375324-5fee-491f-8c91-7cc41edba76f"). InnerVolumeSpecName "kube-api-access-vr8pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.044134 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b375324-5fee-491f-8c91-7cc41edba76f" (UID: "7b375324-5fee-491f-8c91-7cc41edba76f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.097308 4900 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.097345 4900 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b375324-5fee-491f-8c91-7cc41edba76f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.097359 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr8pl\" (UniqueName: \"kubernetes.io/projected/7b375324-5fee-491f-8c91-7cc41edba76f-kube-api-access-vr8pl\") on node \"crc\" DevicePath \"\"" Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.530919 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djxlm" event={"ID":"7b375324-5fee-491f-8c91-7cc41edba76f","Type":"ContainerDied","Data":"c7e531d006459f5e5b456fe50b27477413b0551580497171a8305db73b419208"} Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.531288 4900 scope.go:117] "RemoveContainer" containerID="24cd9d25e820e77d0c06b00de796489de470a73e53d732e65b0a4f9b8838e90e" Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.531489 4900 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-djxlm" Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.587719 4900 scope.go:117] "RemoveContainer" containerID="98fb7fe4891231272ef931b064fb047f588862b39d2cef90126abde16d604fda" Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.593109 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-djxlm"] Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.645583 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-djxlm"] Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.910363 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:31:40 crc kubenswrapper[4900]: E1202 16:31:40.910713 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:31:40 crc kubenswrapper[4900]: I1202 16:31:40.931734 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b375324-5fee-491f-8c91-7cc41edba76f" path="/var/lib/kubelet/pods/7b375324-5fee-491f-8c91-7cc41edba76f/volumes" Dec 02 16:31:41 crc kubenswrapper[4900]: I1202 16:31:41.312491 4900 scope.go:117] "RemoveContainer" containerID="c17e4dfac30144d60ca248152cad87e6beb3f2b4d99978046f604e66e0ee6212" Dec 02 16:31:46 crc kubenswrapper[4900]: I1202 16:31:46.604995 4900 generic.go:334] "Generic (PLEG): container finished" podID="cea334bc-bbda-43e6-999e-1d1827b0b880" containerID="8c67ea529759be5701a5d647ea95dada365492676b30b5c893e7537bfb6614c9" exitCode=0 Dec 02 16:31:46 crc kubenswrapper[4900]: I1202 16:31:46.605093 4900 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-must-gather-hgg9b/must-gather-n49lx" event={"ID":"cea334bc-bbda-43e6-999e-1d1827b0b880","Type":"ContainerDied","Data":"8c67ea529759be5701a5d647ea95dada365492676b30b5c893e7537bfb6614c9"} Dec 02 16:31:46 crc kubenswrapper[4900]: I1202 16:31:46.606184 4900 scope.go:117] "RemoveContainer" containerID="8c67ea529759be5701a5d647ea95dada365492676b30b5c893e7537bfb6614c9" Dec 02 16:31:47 crc kubenswrapper[4900]: I1202 16:31:47.219255 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hgg9b_must-gather-n49lx_cea334bc-bbda-43e6-999e-1d1827b0b880/gather/0.log" Dec 02 16:31:52 crc kubenswrapper[4900]: I1202 16:31:52.910449 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:31:52 crc kubenswrapper[4900]: E1202 16:31:52.911460 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:31:55 crc kubenswrapper[4900]: I1202 16:31:55.310486 4900 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hgg9b/must-gather-n49lx"] Dec 02 16:31:55 crc kubenswrapper[4900]: I1202 16:31:55.311029 4900 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hgg9b/must-gather-n49lx" podUID="cea334bc-bbda-43e6-999e-1d1827b0b880" containerName="copy" containerID="cri-o://7bf57165c387cb78f920d2a37cc70a32b14e035c1640134e400fa6512e1ff6fa" gracePeriod=2 Dec 02 16:31:55 crc kubenswrapper[4900]: I1202 16:31:55.321597 4900 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hgg9b/must-gather-n49lx"] Dec 02 16:31:55 crc kubenswrapper[4900]: I1202 16:31:55.725170 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hgg9b_must-gather-n49lx_cea334bc-bbda-43e6-999e-1d1827b0b880/copy/0.log" Dec 02 16:31:55 crc kubenswrapper[4900]: I1202 16:31:55.726049 4900 generic.go:334] "Generic (PLEG): container finished" podID="cea334bc-bbda-43e6-999e-1d1827b0b880" containerID="7bf57165c387cb78f920d2a37cc70a32b14e035c1640134e400fa6512e1ff6fa" exitCode=143 Dec 02 16:31:55 crc kubenswrapper[4900]: I1202 16:31:55.862531 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hgg9b_must-gather-n49lx_cea334bc-bbda-43e6-999e-1d1827b0b880/copy/0.log" Dec 02 16:31:55 crc kubenswrapper[4900]: I1202 16:31:55.863248 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hgg9b/must-gather-n49lx" Dec 02 16:31:56 crc kubenswrapper[4900]: I1202 16:31:56.023034 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj4lr\" (UniqueName: \"kubernetes.io/projected/cea334bc-bbda-43e6-999e-1d1827b0b880-kube-api-access-jj4lr\") pod \"cea334bc-bbda-43e6-999e-1d1827b0b880\" (UID: \"cea334bc-bbda-43e6-999e-1d1827b0b880\") " Dec 02 16:31:56 crc kubenswrapper[4900]: I1202 16:31:56.023275 4900 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cea334bc-bbda-43e6-999e-1d1827b0b880-must-gather-output\") pod \"cea334bc-bbda-43e6-999e-1d1827b0b880\" (UID: \"cea334bc-bbda-43e6-999e-1d1827b0b880\") " Dec 02 16:31:56 crc kubenswrapper[4900]: I1202 16:31:56.030053 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea334bc-bbda-43e6-999e-1d1827b0b880-kube-api-access-jj4lr" (OuterVolumeSpecName: "kube-api-access-jj4lr") pod "cea334bc-bbda-43e6-999e-1d1827b0b880" (UID: "cea334bc-bbda-43e6-999e-1d1827b0b880"). InnerVolumeSpecName "kube-api-access-jj4lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 16:31:56 crc kubenswrapper[4900]: I1202 16:31:56.126213 4900 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj4lr\" (UniqueName: \"kubernetes.io/projected/cea334bc-bbda-43e6-999e-1d1827b0b880-kube-api-access-jj4lr\") on node \"crc\" DevicePath \"\"" Dec 02 16:31:56 crc kubenswrapper[4900]: I1202 16:31:56.254249 4900 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cea334bc-bbda-43e6-999e-1d1827b0b880-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cea334bc-bbda-43e6-999e-1d1827b0b880" (UID: "cea334bc-bbda-43e6-999e-1d1827b0b880"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 16:31:56 crc kubenswrapper[4900]: I1202 16:31:56.331281 4900 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cea334bc-bbda-43e6-999e-1d1827b0b880-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 02 16:31:56 crc kubenswrapper[4900]: I1202 16:31:56.738674 4900 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hgg9b_must-gather-n49lx_cea334bc-bbda-43e6-999e-1d1827b0b880/copy/0.log" Dec 02 16:31:56 crc kubenswrapper[4900]: I1202 16:31:56.739212 4900 scope.go:117] "RemoveContainer" containerID="7bf57165c387cb78f920d2a37cc70a32b14e035c1640134e400fa6512e1ff6fa" Dec 02 16:31:56 crc kubenswrapper[4900]: I1202 16:31:56.739275 4900 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hgg9b/must-gather-n49lx" Dec 02 16:31:56 crc kubenswrapper[4900]: I1202 16:31:56.776276 4900 scope.go:117] "RemoveContainer" containerID="8c67ea529759be5701a5d647ea95dada365492676b30b5c893e7537bfb6614c9" Dec 02 16:31:56 crc kubenswrapper[4900]: I1202 16:31:56.921116 4900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea334bc-bbda-43e6-999e-1d1827b0b880" path="/var/lib/kubelet/pods/cea334bc-bbda-43e6-999e-1d1827b0b880/volumes" Dec 02 16:32:07 crc kubenswrapper[4900]: I1202 16:32:07.910834 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:32:07 crc kubenswrapper[4900]: E1202 16:32:07.911484 4900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ngwgq_openshift-machine-config-operator(1c8f7b18-f260-4beb-b4ff-0af7e505c7d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" Dec 02 16:32:19 crc kubenswrapper[4900]: I1202 16:32:19.910614 4900 scope.go:117] "RemoveContainer" containerID="fbf48a20b84bbef3ffe769550b31c7ff073b1da163884d554b205d8071f83cb3" Dec 02 16:32:21 crc kubenswrapper[4900]: I1202 16:32:21.052884 4900 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" event={"ID":"1c8f7b18-f260-4beb-b4ff-0af7e505c7d1","Type":"ContainerStarted","Data":"50b87d8873bc9453709b9e61ae711c618371947cecc26eabffb7cbab241be090"} Dec 02 16:34:45 crc kubenswrapper[4900]: I1202 16:34:45.116402 4900 patch_prober.go:28] interesting pod/machine-config-daemon-ngwgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 16:34:45 crc kubenswrapper[4900]: I1202 16:34:45.116910 4900 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngwgq" podUID="1c8f7b18-f260-4beb-b4ff-0af7e505c7d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"